SAGE Journal Articles

Chapter 1: Research, Biases in Thinking, and the Role of Theories

Click on the following links. Please note these will open in a new window.

Article 1: Dale, S. (2015). Heuristics and biases: The science of decision-making. Business Information Review, 32(2), 93-99.

Summary: The article discusses Kahneman’s dual-process theory of System 1 and System 2 cognition, as well as commonly used heuristics, including the representativeness and availability heuristics. The article further discusses the psychological basis of heuristics and the errors that can result from their use.

Learning Objective: Heuristics (Representativeness and Availability)

Questions to Consider

  1. Why are heuristics considered a part of System 1 thinking?
  2. Describe two additional heuristics from the article and how they can produce errors in thinking.
  3. Compare the results from the survey that assessed the most feared ways to die with the actual leading causes of death. Why are these results so different?
     

Article 2: Knobloch-Westerwick, S., & Kleinman, S. B. (2012). Preelection selective exposure: Confirmation bias versus informational utility. Communication Research, 39(2), 170-193.

Summary: The article examines the roles of confirmation bias and informational utility in selective exposure to internet articles preceding an election. The authors hypothesized that when a person’s political party was expected to win, the person would rely on confirmation bias in selecting which articles to attend to, but when the party was not expected to win, the person would rely on informational utility. Consistent with these predictions, because Democrats were expected to win, liberals showed a confirmation bias in their article selections, whereas conservatives showed informational utility by focusing on articles consistent with the opposition’s stance on the issues. The article promotes discussion of the situations that may lead to confirmation bias rather than informational utility.

Learning Objective: Humans Want to Confirm Hypotheses

Questions to Consider

  1. What is the difference between confirmation bias and informational utility?
  2. In their article, what do the authors posit as the cause of people using a confirmation bias compared to informational utility?
  3. Consider more recent elections since the 2008 election. What predictions can be made regarding which political leaning was likely to use a confirmation bias versus informational utility?
     

Article 3: Donovan, S. M., O’Rourke, M., & Looney, C. (2015). Your hypothesis or mine? Terminological and conceptual variation across disciplines. SAGE Open, 5(2), 1-13.

Summary: The authors discuss the challenges of the use of scientific terms among different disciplines participating in cross-disciplinary research (CDR). Using teambuilding workshops, the authors examined how different definitions and assumptions about the definition and use of hypotheses in research can lead to problems in CDR efforts.

Learning Objective: The Difference Between a Law, a Theory, and a Hypothesis

Questions to Consider

  1. What is CDR? Why is it a particularly important type of research at universities and institutions?
  2. What is the definition of a hypothesis in psychology? How are hypotheses used in psychology?
  3. How can differing uses and definitions of hypotheses across disciplines create confusion among people in those disciplines? What tips could be offered to teams of researchers from different disciplines that are conducting CDR?

Chapter 2: Generating and Shaping Ideas: Tradition and Innovation

Article 1: Foster, J. G., Rzhetsky, A., & Evans, J. A. (2015). Tradition and innovation in scientists’ research strategies. American Sociological Review, 80(5), 875-908.

Summary: The authors discuss the factors that affect whether a scientist chooses to engage in innovative research compared to traditional areas of knowledge. In particular, they examine the trade-offs between innovative research that may be more highly cited but carries the risk of going unpublished versus traditional research that is not as highly cited but is more certain of publication.

Learning Objective: Tradition and Innovation

Questions to Consider

  1. What do the authors identify as the “essential tension” in research? How can this tension apply to psychology?
  2. What factors may influence a researcher to choose a more traditional research strategy? What factors may influence a researcher to choose a more innovative strategy?
  3. What suggestions do the authors provide for encouraging innovation? Do you have additional ideas for encouraging innovation?
     

Article 2: Drummond, A. (1994). Writing a research article. British Journal of Occupational Therapy, 57(8), 303-305.

Summary: The author discusses tips for writing a research article and what should be included in the major sections when writing or reading a research article.

Learning Objective: Reading an Article

Questions to Consider

  1. What are the major sections of a research article? What purpose does each section serve?
  2. What is the difference between a results and a discussion section?
     

Article 3: Makel, M. C., Plucker, J. A., & Hegarty, B. (2012). Replications in psychology research: How often do they really occur? Perspectives on Psychological Science, 7, 537-542.

Summary: The authors report a review of the percentage of psychological articles that are replications. They find that a relatively low percentage of studies published in top journals are replications of previously conducted studies.

Learning Objective: Journal Publication Practices, File Drawer Effect

Questions to Consider

  1. What percentage of articles are replications of previous research? Is this surprising given the trait of replicability in science?
  2. Is it concerning that successful replications are more likely when conducted by the authors of the original study? Why or why not?
  3. What factors may prevent a researcher from conducting a replication study?

Chapter 3: Research Design Approaches and Issues: An Overview

Article 1: Smith, C. J. (2012). Type I and Type II errors: What are they and why do they matter? Phlebology, 27, 199-200.

Summary: The author provides a description of Type I and Type II errors and how they can be controlled by the researcher.

Learning Objective: Type I vs. Type II Errors

Questions to Consider

  1. What is a Type I error and a Type II error? Which error is more problematic for psychological research?
  2. What can a researcher do to control or prevent making a Type I and Type II error?
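The two errors can be seen side by side with a minimal Python simulation (my own illustration, not from Smith’s article): under a true null hypothesis, every “significant” result is a Type I error; under a real effect, every non-significant result is a Type II error. The sample sizes, effect size, and normal-approximation test below are arbitrary choices for demonstration.

```python
# Illustrative sketch: estimating Type I and Type II error rates
# for a two-sample test at alpha = .05 (normal approximation, large n).
import math
import random
import statistics

def two_sample_p(a, b):
    """Two-tailed p-value for a mean difference, via the standard normal CDF."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
alpha, n, reps = 0.05, 50, 2000

# Type I error: both groups drawn from the SAME population (null is true)
type1 = sum(
    two_sample_p([random.gauss(0, 1) for _ in range(n)],
                 [random.gauss(0, 1) for _ in range(n)]) < alpha
    for _ in range(reps)
) / reps

# Type II error: a real 0.3 SD difference exists (null is false)
type2 = sum(
    two_sample_p([random.gauss(0, 1) for _ in range(n)],
                 [random.gauss(0.3, 1) for _ in range(n)]) >= alpha
    for _ in range(reps)
) / reps

print(f"Type I error rate  ~ {type1:.3f} (should hover near alpha = .05)")
print(f"Type II error rate ~ {type2:.3f} (misses of a real 0.3 SD effect)")
```

Note how the Type I rate is pinned near the alpha level the researcher chooses, while the Type II rate depends on effect size and sample size, which is exactly why the two errors are controlled in different ways.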
     

Article 2: Sink, C. A., & Mvududu, N. H. (2010). Statistical power, sampling, and effect sizes: Three keys to research relevancy. Counseling Outcome Research and Evaluation, 1(2), 1-18.

Summary: The authors discuss the relationship between statistical power and effect sizes and sampling in demonstrating practical significance in clinical and counseling research.

Learning Objective: Relationships Among Sample Size, Power, and Effect Size

Questions to Consider

  1. In your own words, define sample size, power, and effect size.
  2. How are effect size and power related?
  3. What role does sample size play in effect size and power?
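The relationships these questions probe can be sketched with the standard normal-approximation power formula for a two-sample test, power ≈ Φ(d·√(n/2) − z_crit). The formula and the .05 critical value are standard; the specific d and n values below are arbitrary examples, not taken from the article.

```python
# Sketch: how power depends on effect size (Cohen's d) and per-group n.
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def approx_power(d, n_per_group, z_crit=1.96):
    """Approximate two-tailed power at alpha = .05 for a two-sample test."""
    return normal_cdf(d * math.sqrt(n_per_group / 2) - z_crit)

for d in (0.2, 0.5, 0.8):          # small, medium, large effects
    for n in (25, 100, 400):
        print(f"d = {d}, n = {n:3d} per group -> power ~ {approx_power(d, n):.2f}")
```

Running this shows the pattern the article emphasizes: small effects need very large samples to reach conventional power (.80), while large effects reach it with modest samples.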
     

Article 3: Werfel, S. H. (2017). Voting and civic engagement: Results from an online field experiment. Research and Politics, 1-3.

Summary: The author conducted a study investigating whether voting during an election predicted greater civic engagement. Individuals who voted in the 2016 primary election were more likely to open a survey than those who did not vote.

Learning Objective: Where Research Takes Place

Questions to Consider

  1. What makes this a field study compared to a laboratory study?
  2. What other type of research design could this be classified as based on the groups compared in the study?
  3. What limitations exist in this study with regard to causation?

Chapter 4: Ethics and the Institutional Review Board (IRB) Process

Article 1: Griggs, R. A. (2017). Milgram’s obedience study: A contentious classic reinterpreted. Teaching of Psychology, 44(1), 32-37.

Summary: The author discusses new and past criticisms of Milgram’s obedience study ranging from the initial concerns about the ethics of the study to more recent concerns about “file drawer issues” and unpublished data that suggested people did not obey the experimenter. Additionally, the author presents a new interpretation of the findings in terms of engaged fellowship rather than obeying an authority.

Learning Objective: Chapter Overview

Questions to Consider

  1. What are some of the old criticisms of Milgram’s study related to the ethics of his research? What are some of the new criticisms of his studies? How do these modern and past criticisms both reflect concerns about the ethics of research?
  2. What are the similarities and differences between past interpretations of his findings and the reinterpretation of his findings cited by Griggs?
  3. Milgram’s study is consistently cited in social psychology textbooks as evidence for obedience. Should we continue to cite research thought to be unethical as evidence for a phenomenon or should the concerns about ethics outweigh its benefit to society?
     

Article 2: Klitzman, R. (2013). How good does the science have to be in proposals submitted to Institutional Review Boards? An interview study of Institutional Review Board personnel. Clinical Trials, 10, 761-766.

Summary: The author conducted interviews with IRB personnel around the country to assess how IRBs make decisions and the conflicts they feel in altering scientific proposals. The article discusses IRB concerns about maximizing the benefits of a study, “good enough” versus perfect studies, and altering studies already approved by other agencies.

Learning Objective: Nuts and Bolts of IRBs

Questions to Consider

  1. What does it mean that IRBs feel that “the social and thus scientific benefits be maximized?”
  2. What does it mean that a study is “good enough” versus “good as possible”?
  3. How do IRBs feel about grant-funded studies at major grant agencies like the NIH and NSF that have been approved?
     

Article 3: Barrera, D., & Simpson, B. (2012). Much ado about deception: Consequences of deceiving research participants in the social sciences. Sociological Methods and Research, 41(3), 383-413.

Summary: The authors describe two experiments in which participants were either deceived or not deceived, and the effects of deception on participants’ subsequent beliefs about being deceived and on their behavior were observed. The results suggest that the validity of experimental results is not affected by deception.

Learning Objective: Deception and Alternatives

Questions to Consider

  1. The authors point out that arguments about deception focus less on ethics and more on its pragmatic use. What are the ethical concerns about deception? What are the pragmatic concerns? Why do you think there is not a larger ethical concern?
  2. How did deception affect the beliefs of participants about deception? How did it affect their behaviors?
  3. Based on their results, would you use deception in your research? Would you advise other people to use deceptions? Why or why not?

Chapter 5: Measures and Survey Research Tools

Article 1: Beckenbach, J., Schmidt, E., & Reardon, R. (2009). The interpersonal relationship resolution scale: A reliability and validity scale. The Family Journal: Counseling and Therapy for Couples and Families, 17(4), 335-341.

Summary: The authors report the results of a study assessing the reliability and validity of the Interpersonal Relationship Resolution Scale (IRRS), which assesses individual perceptions of violations in a relationship that resulted in interpersonal injury and their willingness to forgive such violations. The authors report on a study used to assess various forms of reliability and validity of the scale.

Learning Objective: Qualities of Measures: Reliability and Validity

Questions to Consider

  1. Define reliability and validity in your own words.
  2. What types of reliability did the authors test for the IRRS? What were their results for these forms of reliability?
  3. What types of validity did the authors test for the IRRS? What were the results for these types of validity?
     

Article 2: Rafilson, F., & Sison, R. (1996). Seven criterion-related validity studies conducted with the National Police Officer Selection Test (POST). Psychological Reports, 78, 163-176.

Summary: The authors report on seven studies conducted to test the criterion-related validity of the National Police Officer Selection Test (POST), which is a standardized screening test used in the selection of potential police officers. The authors correlated scores from the POST to a variety of scores obtained during police officer training and work performance.

Learning Objectives: Validity: Content, Face, and Criterion-Related

Questions to Consider

  1. In your own words, what is criterion-related validity? Why is it an important form of validity to test for the POST?
  2. What are the measures that the POST is correlated with to test criterion-related validity? What are the results of these tests?
  3. What would be a criticism of criterion-related validity for the POST? (HINT: The only police officers whose training or work performance can be measured are those who made it through the POST screening.)
     

Article 3: Holtgraves, T. (2004). Social desirability and self-reports: Testing models of socially desirable responding. Personality and Social Psychology Bulletin, 30, 161-172.

Summary: The author reports the results of three studies designed to examine the process and conditions under which people respond to self-report measures in a socially desirable way. In the three studies, the author manipulated the degree to which a participant might respond in a socially desirable way. The results showed that response times increase when a participant is responding in a more socially desirable way, suggesting that socially desirable responses are edited responses to a trait or behavior.

Learning Objective: Social Desirability Concerns

Questions to Consider

  1. Define social desirability in your own words. Why is social desirability a concern in survey research?
  2. Briefly describe how the author manipulated social desirability conditions in each of the three studies.
  3. How did the author define socially desirable responses? What was the interpretation of this finding?

Chapter 6: Correlational and Qualitative Research

Article 1: Riketta, M. (2004). Does social desirability inflate the correlation between self-esteem and anxiety? Psychological Reports, 94, 1232-1234.

Summary: The author reports the results of a study examining whether people’s feelings of social desirability explain the relationship between self-esteem and anxiety. The researcher found that including a measure of social desirability reduced the correlation between self-esteem and anxiety, but that this reduction, while significant, may be too small to be of concern in research.

Learning Objective: Statistics for Correlation

Questions to Consider

  1. What is the correlation between self-esteem and anxiety that has typically been found in research? Interpret this correlation in terms of its strength and direction.
     
  2. What is the relationship between self-esteem and anxiety after including social desirability? Interpret this correlation in terms of its strength and direction.
     
  3. What is the interpretation for why social desirability decreases the relationship between self-esteem and anxiety?
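The kind of shrinkage the article reports can be mimicked with the standard partial-correlation formula. The correlation values below are invented for illustration; they are not Riketta’s actual results.

```python
# Sketch: a zero-order correlation weakening once a third variable is
# partialled out, using the standard formula
#   r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
import math

def partial_r(r_xy, r_xz, r_yz):
    """Correlation between x and y after controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# x = self-esteem, y = anxiety, z = social desirability (values invented)
r_xy = -0.60   # strong negative: higher self-esteem, lower anxiety
r_xz = 0.30    # self-esteem relates modestly to social desirability
r_yz = -0.25   # anxiety relates modestly (negatively) to it

print(f"zero-order r = {r_xy:.2f}")
print(f"partial r    = {partial_r(r_xy, r_xz, r_yz):.2f}")
```

With these made-up numbers the partial correlation is slightly weaker in magnitude than the zero-order one, mirroring the article’s point that the reduction can be statistically real yet practically small.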
     

Article 2: Fisher, B. S. (2009). The effects of survey question wording on rape estimates. Violence Against Women, 15(2), 133-147.

Summary: The author reports the results of a quasi-experimental study that examined how the wording of questions about incidences of rape can lead to different estimates of the percentage of women who have been raped nationally and at universities. The author finds that question wording produces significant differences in estimates of sexual assault.

Learning Objective: Questions About Groups: Quasi-Experimental Designs

Questions to Consider

  1. What are the two groups in this study? Why do these groups make this a quasi-experimental design?
  2. How does the wording of the question affect the results of rape estimates?
  3. Can this study conclude that the groups caused differences in the estimates? Why or why not?
     

Article 3: Mazzola, J. J., Walker, E. J., Shockley, K. M., & Spector, P. E. (2011). Examining stress in graduate assistants: Combining qualitative and quantitative survey methods. Journal of Mixed Methods Research, 5, 198-211.

Summary: The authors conducted a study of graduate assistants in which they were asked about stressors they experienced and analyzed measures of stress related to a variety of stressors. They found that graduate students who reported specific types of stress scored higher on quantitative measures of those stressors. They suggest that using both qualitative and quantitative methods for assessing stress may be better than using one method only.

Learning Objectives: Where Qualitative and Quantitative Meet

Questions to Consider

  1. In your own words, define qualitative and quantitative research methods.
  2. What type of qualitative methods did the authors use to examine stress in graduate assistants?
  3. What type of quantitative methods did the authors use to examine stress in graduate assistants?

Chapter 7: Experimental Approaches: Between Subjects Designs

Article 1: Bernstein, M. J., Young, S. G., Brown, C. M., Sacco, D. F., & Claypool, H. M. (2008). Adaptive responses to social exclusion: Social rejection improves detection of real and fake smiles. Psychological Science, 19(10), 981-983.

Summary: The authors examined whether experiences of rejection led to better discrimination between Duchenne (real) and non-Duchenne (fake) smiles. Participants wrote about a time they were rejected, a time they were included, or what they did that morning, and then decided whether smiles were fake or real. The results revealed that experiences of rejection improved the ability to distinguish real from fake smiles relative to experiences of inclusion or the control condition.

Learning Objectives: Common Types of Between Subjects Designs

Questions to Consider

  1. What is the independent variable in the study? How many groups of the independent variable are there?
  2. What are the advantages of conducting the study as a between subjects design?
  3. What are the disadvantages of conducting the study as a between subjects design?
     

Article 2: Taylor, S. E., Welch, W. T., Kim, H. S., & Sherman, D. K. (2007). Cultural differences in the impact of social support on psychological and biological stress responses. Psychological Science, 18 (9), 831-837.

Summary: The authors examined the difference between desire for implicit versus explicit social support among Asians and Asian Americans. Participants either wrote letters asking for explicit support from a loved one, wrote about a valued social group, or wrote about campus landmarks. Participants who wrote about a valued social group were less distressed in a subsequent stressful exercise.

Learning Objectives: Multiple Comparisons: Planned and Unplanned Comparisons

Questions to Consider

  1. What are the independent variables in the study? What are the groups of the independent variable(s)?
  2. The authors conducted planned comparisons to examine the interactions. Why did they choose this type of analysis?
  3. How are planned comparisons different than unplanned comparisons?
     

Article 3: Coker, A. L., Cook-Craig, P. G., Williams, C. M., Fisher, B. S., Clear, E. R., Garcia, L. S., & Hegge, L. M. (2011). Evaluation of green dot: An active bystander intervention to reduce sexual violence on college campuses. Violence Against Women, 17(6), 777-796.

Summary: The authors examine the effects of the Green Dot intervention program on bystander intervention behaviors with regard to sexual violence. Participants surveyed had heard a Green Dot speech, and some had also received bystander intervention training. The researchers assessed students’ beliefs about rape, acceptance of general dating violence, and self-reported or observed bystander behaviors. Results showed that individuals who received the training endorsed rape myths less and were more likely to self-report or be observed engaging in bystander intervention behaviors.

Learning Objectives: Multiple DVs in Research Designs

Questions to Consider

  1. What are the dependent variables in the study? How are they related to bystander interventions?
  2. Why did the researchers use a MANOVA instead of an ANOVA?
  3. What are the benefits of using the MANOVA?

Chapter 8: Within, Mixed, Pre–Post Experimental, and Specialized Correlational Designs

Article 1: Keown, C. F. (1983). Comparison of between-subjects and within-subjects designs in assessing perceptions of risk. Psychological Reports, 53, 655-661.

Summary: The author conducted a study of drug preference in which some participants rated all five drugs, which varied in side effects and frequency of occurrence, while others rated only one drug. The researcher found that drugs rated together received higher ratings than drugs rated one at a time.

Learning Objective: Within Subjects Designs: Overview

Questions to Consider

  1. How did the researcher create the study as a within subjects design? As a between subjects design?
  2. What is the difference between a within subjects and between subjects design?
  3. What are the advantages and disadvantages of a within subjects design? Did the researcher take steps to control for the disadvantages? How?
     

Article 2: Bonanno, G. A., Papa, A., Lalande, K., Westphal, M., & Coifman, K. (2004). The importance of being flexible: The ability to both enhance and suppress emotional expression predicts long-term adjustment. Psychological Science, 15, 482-487.

Summary: The authors had participants perform a task in which they enhanced their emotional expressions, suppressed their emotional expressions, or behaved normally, and then tested whether task performance predicted adjustment across two years of college. Participants who were better able to both express and suppress emotion were better adjusted by the end of two years than participants who were less able to do so. The results support flexibility of emotional expression as a better avenue for adjustment.

Learning Objective: Types of Within Subjects Designs

Questions to Consider

  1. What was the within subjects variable in this study? How did they manipulate it?
  2. Did the researchers control for carryover effects of the independent variable?
     

Article 3: Hiskey, S., & Troop, N. A. (2002). Online longitudinal survey research: Viability and participation. Social Science Computer Review, 20(3), 250-259.

Summary: The authors examined the viability of using online surveys for longitudinal research with people who have experienced trauma. Participants who signed up completed three waves of surveys: one at sign-up, one 3 months later, and one 6 months later. The results suggest that online surveys are a valid approach to conducting repeated measures longitudinal research.

Learning Objectives: Longitudinal Designs: Advantages and Disadvantages

Questions to Consider

  1. What is a longitudinal design? What features of the study illustrate this type of design?
  2. What did the researchers find? What do the results suggest about using online research for longitudinal designs?
  3. What are the advantages and disadvantages of longitudinal designs? How do the advantages and disadvantages show up in the research?

Chapter 9: Recruiting Participants

Article 1: Klein, K., & Cheuvront, B. (1990). The subject-experimenter contract: A reexamination of subject pool contamination. Teaching of Psychology, 17(3), 166-169.

Summary: The authors conducted three experiments to test whether instructions about confidentiality increase participants’ disclosure. Across the three studies, disclosure of confidential information increased as confidentiality requirements increased, suggesting that disclosure of confidential details and information may be a problem for subject pools.

Learning Objectives: Subject Pools: Characteristics, Software, Practical Issues

Questions to Consider

  1. What is confidentiality in a study? Why is it important for the researchers and the participants?
  2. What did the researchers find happens with disclosure of confidential information across the three studies?
  3. What do their results suggest about confidentiality in research studies?
     

Article 2: Brickman Bhutta, C. (2012). Not by the book: Facebook as a sampling frame. Sociological Methods and Research, 41(1), 57-88.

Summary: The author examines the usefulness of snowball samples for reaching subpopulations through social networking sites. The author finds that this sampling method, although it yielded a sample that was disproportionately female, young, educated, and actively religious, produced results similar to national polls.

Learning Objectives: Types of Sampling

Questions to Consider

  1. What is a snowball sample? How is it different than other sampling types?
  2. What concerns about representativeness exist using a snowball sample?
  3. How can a researcher effectively use snowball samples in social media sites?
     

Article 3: Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6, 3-5.

Summary: The authors discuss the utility and differences between a traditional participant pool and Amazon Mechanical Turk (MTurk). Their analysis found that MTurk participants were slightly more diverse than the typical college sample, that participation is affected by the length of the task and compensation, that data quality is not undermined by compensation rates, and that data appears to be as reliable as that obtained via traditional samples.

Learning Objectives: Amazon Mechanical Turk and Online Paid Panels

Questions to Consider

  1. What are MTurk and other online participant recruitment sites? How are they different than the normal college sample?
  2. What advantages do these recruitment sites offer? What are some disadvantages?
  3. How should these sites be used in psychological science?

Chapter 10: Organizing Data and Analyzing Results

Article 1: Newman, D. A. (2014). Missing data: Five practical guidelines. Organizational Research Methods, 17(4), 372-411.

Summary: The author discusses issues related to missing data in research. The author focuses on the types of missing data that exist, what leads to missing data, statistical issues that arise from missing data, and choices that a researcher must make to deal with missing data. The author concludes that social sciences often choose methods that are more prone to bias and error and provides guidelines for better handling of missing data.

Learning Objectives: Missing Data: Points of View and Choices

Questions to Consider

  1. What is missing data? What are the reasons that a data file can have missing data?
  2. What are the differences between missing completely at random, missing at random, and missing not at random?
  3. What guidelines should a researcher consider when deciding how to handle missing data?
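A toy simulation can make the three mechanisms in question 2 concrete. The variables and probabilities below are invented for illustration and are not taken from Newman’s article.

```python
# Toy illustration of the three missingness mechanisms on an income variable.
import random

random.seed(0)
people = [{"age": random.randint(20, 70),
           "income": random.randint(20, 150)} for _ in range(1000)]

def observed_incomes(mechanism):
    """Return the incomes that survive the given missingness mechanism."""
    kept = []
    for p in people:
        if mechanism == "MCAR":      # missing completely at random
            missing = random.random() < 0.20
        elif mechanism == "MAR":     # depends only on an OBSERVED variable (age)
            missing = random.random() < (0.40 if p["age"] < 35 else 0.10)
        else:                        # MNAR: depends on the missing value itself
            missing = random.random() < (0.40 if p["income"] > 100 else 0.05)
        if not missing:
            kept.append(p["income"])
    return kept

means = {}
for mech in ("MCAR", "MAR", "MNAR"):
    kept = observed_incomes(mech)
    means[mech] = sum(kept) / len(kept)
    print(f"{mech}: mean of observed incomes = {means[mech]:.1f}")

# Under MNAR the observed mean is biased downward: high incomes are
# exactly the values most likely to be missing.
```

This is why the mechanism matters for the choices in question 3: simple fixes such as listwise deletion behave tolerably under MCAR but can produce biased estimates when the data are MNAR.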
     

Article 2: Culpepper, S. A., & Aguinis, H. (2011). R is for revolution: A cutting-edge, free, open source statistical package. Organizational Research Methods, 14(4), 735-740.

Summary: The authors discuss the advantages and disadvantages of R as a statistical package for analyzing data. Advantages include its ability to conduct the multiple types of analyses used in the behavioral sciences, its cost, its continual updates, and its ability to create visual graphs. Disadvantages include limited supporting documentation, its programming-based interface, and its handling of missing data. The authors conclude that R is a viable alternative with growing popularity.

Learning Objectives: Other Statistical Software

Questions to Consider

  1. What is R? How does it compare to SPSS?
  2. What advantages does R offer over SPSS? What disadvantages does R have compared to SPSS?
  3. How do you think the cost of statistical software will affect the choices individuals and universities make in the future? Should faculty and students be involved in these decisions?
     

Article 3: Starbuck, W. H. (2016). 60th anniversary essay: How journals could improve research practices in social science. Administrative Science Quarterly, 61(2), 165-183.

Summary: The author discusses ways to improve the evaluation of manuscripts for publication. Special attention is given to problems in the prevalent methodology including HARKing and p-hacking. The author provides guidelines for improving editorial decisions in this context.

Learning Objectives: Opportunistic Biases and “Going Fishing”

Questions to Consider

  1. What is HARKing? What is p-hacking?
  2. Why are HARKing and p-hacking a problem for journals’ editorial decisions? Why are they a problem for science in general?
  3. What suggestions does the author make for making editorial decisions that prevent problems of HARKing and p-hacking? What other suggestions would you make?
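One way to see why p-hacking worries editors is a quick simulation (my own sketch, not from Starbuck’s essay): when no effect exists but an analyst tests ten independent outcomes and reports any p < .05, false positives far exceed the nominal 5%.

```python
# Sketch: testing many uncorrelated outcomes and reporting the best one
# inflates the false-positive rate well above the nominal alpha.
import math
import random
import statistics

def two_sample_p(a, b):
    """Two-tailed p-value for a mean difference (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
n, reps, n_outcomes = 40, 1000, 10

false_pos = 0
for _ in range(reps):
    # No real effect exists: every outcome is pure noise in both groups.
    ps = [two_sample_p([random.gauss(0, 1) for _ in range(n)],
                       [random.gauss(0, 1) for _ in range(n)])
          for _ in range(n_outcomes)]
    if min(ps) < 0.05:       # the "hack": report the best of 10 outcomes
        false_pos += 1

rate = false_pos / reps
print(f"False-positive rate with 10 outcomes: {rate:.2f} (nominal .05)")
```

The analytic expectation, 1 − 0.95¹⁰ ≈ .40, matches the simulation: selectively reporting the best of ten tests yields a “significant” finding roughly 40% of the time even when nothing is there.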

Chapter 11: Writing and Presenting Your Research

Article 1: Landau, J. D., Druen, P. B., & Arcuri, J. A. (2002). Methods for helping students avoid plagiarism. Teaching of Psychology, 29(2), 112-115.

Summary: The authors conducted a study in which undergraduates practiced plagiarism identification and proper paraphrasing accompanied by feedback, examples, examples and feedback, or neither examples nor feedback. Participants in all conditions except the no-examples-or-feedback condition were better at identifying plagiarism and showed greater knowledge of plagiarism.

Learning Objectives: Writing: Avoiding Plagiarism

Questions to Consider

  1. What is plagiarism? What are the consequences of plagiarism?
  2. How can examples of plagiarism along with paraphrasing skills improve recognition and knowledge of plagiarism?
  3. How should faculty approach deliberate versus accidental plagiarism?
     

Article 2: Landrum, R. E. (2013). Writing in APA Style: Faculty perspectives of competence and importance. Psychology of Learning and Teaching, 12(3), 259-265.

Summary: The author discusses the results of a survey of faculty regarding the APA-style writing skills of students. The author asked faculty about the importance of 73 writing skills and students’ performance on those skills. The survey results revealed a gap between the skills rated as highly important and student performance on those skills.

Learning Objective: Sections and Formatting

Questions to Consider

  1. List and describe several of the important writing skills identified by the author. How do students perform on these skills?
  2. What are some writing skills identified by the author that you are unfamiliar with and think you may struggle with?
  3. What recommendations would you give professors to help them teach you these writing skills?