Comparative Research

What is Comparative Research?

Comparative research is a method of analysis that involves comparing two or more entities, such as cultures, systems, or social phenomena, to identify similarities, differences, and patterns. Often utilized in comparative analysis essays, this research approach helps uncover insights by analyzing subjects in different contexts. It draws on methods from descriptive and experimental research and can employ both qualitative and quantitative approaches. By combining these techniques, researchers can reveal underlying factors that influence behaviors, policies, or outcomes. Comparative research is essential for developing theories, improving practices, and informing decision-making across various fields, making it a valuable tool for deeper understanding.


Comparative Research Format

When conducting comparative research, it is essential to follow a structured format to ensure clear and systematic analysis. The format typically includes the following components:

Title

Clearly state the subject and focus of the comparison. For example: “A Comparative Study of Education Systems in the US and Finland.”

Introduction

Provide background information on the topic. Explain the purpose of the comparison. State the research question or hypothesis. Briefly mention the entities being compared (e.g., countries, policies, systems).

Literature Review

Summarize existing research relevant to your study. Highlight the key studies that have made similar comparisons. Identify gaps in the research that your study aims to address.

Methodology

Describe the research methods (qualitative, quantitative, or mixed). Specify the criteria for comparison (e.g., economic factors, cultural influences). Explain the sources of data (e.g., surveys, interviews, official reports). Clarify the timeframe and geographic scope of the study.

Entities or Case Studies

Provide detailed descriptions of the entities being compared. Explain the key characteristics of each (e.g., social, political, or economic features).

Criteria for Comparison

Outline the key variables or dimensions being compared (e.g., education systems, healthcare policies, governance structures). Define each criterion and explain why it is important for the comparison.
Comparative Analysis

Compare and contrast the entities based on the identified criteria. Use tables or charts to clearly display similarities and differences. Discuss the patterns, trends, and insights that emerge from the comparison.

Discussion

Interpret the findings of the analysis. Explain the implications of the similarities or differences. Relate the results to the original research question or hypothesis.

Conclusion

Summarize the key findings of the study. Discuss the broader significance of the comparison. Suggest recommendations or areas for further research.

Comparative Research Example

Title: A Comparative Study of Education Systems in the United States and Finland

1. Introduction
This study compares the education systems of the United States and Finland to identify key differences and similarities in structure, teaching methods, and student outcomes. Both countries are known for their unique approaches to education, yet they yield contrasting results in global rankings. The research seeks to answer how differing policies and practices influence educational success in these two nations.

2. Literature Review
Previous studies highlight that Finland consistently outperforms the United States in terms of student achievement, particularly in literacy and mathematics, as shown in international assessments like PISA. Researchers attribute Finland’s success to factors such as teacher autonomy, smaller class sizes, and less standardized testing. In contrast, the U.S. system relies heavily on standardized testing, which some argue may hinder creativity and deeper learning. This study builds on these findings by directly comparing key educational policies and their impacts.

3. Methodology
This comparative research uses a mixed-method approach. Quantitative data was collected from international student assessments (e.g., PISA scores), while qualitative data came from interviews with educators in both countries. The criteria for comparison include curriculum structure, teacher training, student assessment, and funding models.

4. Entities or Case Studies
United States: Known for its decentralized education system, with local governments making key decisions. Education is often characterized by standardized testing and varying funding levels across districts.
Finland: A centralized education system where teachers are given high levels of autonomy, and students are not subject to standardized testing until the end of secondary school.

5. Criteria for Comparison
Curriculum Structure: The U.S. curriculum is often focused on core subjects like math and reading, with less emphasis on creativity and practical skills. Finland’s curriculum, however, emphasizes holistic development, including life skills and critical thinking.
Teacher Training: U.S. teachers typically require a bachelor’s degree and state certification, while Finnish teachers must hold a master’s degree and undergo extensive pedagogical training.
Student Assessment: The U.S. relies heavily on standardized tests, while Finland uses teacher-based assessments and focuses on student development rather than rankings.
Funding Models: U.S. schools are funded primarily through local taxes, creating disparities in school resources. In contrast, Finland has a more equitable funding model, ensuring all schools have similar resources.

6. Discussion
The comparison reveals that Finland’s education system, with its focus on teacher training, equitable funding, and reduced emphasis on standardized testing, contributes to better student outcomes. In contrast, the U.S. system’s reliance on testing and uneven resource distribution may contribute to its lower performance in international assessments. These differences highlight how policy choices can significantly impact educational quality and equity.

7. Conclusion
This comparative study demonstrates that Finland’s educational practices, including equitable funding and a focus on teacher autonomy, contribute to its success. The U.S. system, while innovative in some areas, could benefit from rethinking its heavy reliance on standardized testing and addressing funding inequalities. Future research could explore the impact of these factors on long-term student success and well-being.

More Comparative Research Examples and Samples

  • Comparative Research in Sociology
  • Comparative Research in Research Methodology
  • Comparative Research in Education
  • Comparative Research in Quantitative
  • Comparative Research in Psychology
  • Comparative Research in Law
  • Comparative Research of Methods
  • Comparative Research in Political Communication
  • Comparative Effectiveness Research for Medical Devices

Comparative Research Report Template

Business Comparative Research Template

Comparative Market Research Template

Comparative Research in Medical Treatments Example

Causal Comparative Research in DOC

Best Practices in Writing an Essay for Comparative Research in Visual Arts

If you are going to write an essay for a comparative research paper, this section is for you. You must know that there are common mistakes that students make in essay writing. To avoid those mistakes, follow the pointers below.

1. Compare the Artworks, Not the Artists

One of the mistakes that students make when writing a comparative essay is comparing the artists instead of their artworks. Unless your instructor asked you to write a biographical essay, focus your writing on the works of the artists that you choose.

2. Consult Your Instructor

There is a broad range of information that you can find on the internet for your project. Some students, however, prefer choosing the images randomly. In doing so, you may not create a successful comparative study. Therefore, we recommend discussing your selections with your teacher.

3. Avoid Redundancy

It is common for students to repeat the ideas that they have listed in the comparison part. Keep in mind that the space for this activity is limited. Thus, it is crucial to reserve each space for more thoroughly developed ideas.

4. Be Minimal

Unless instructed otherwise, it is practical to include only a few items (artworks). In this way, you can focus on developing well-argued information for your study.

5. Master the Assessment Method and the Goals of the Project

We get it. You are doing this project because your instructor told you so. However, you can make your study more valuable by understanding the goals of the project. Know how you can apply this new learning. You should also know the criteria that your teachers use to assess your output. This will give you a chance to maximize the grade that you can get from this project.

Comparing things is one way to know what to improve in various aspects. Whether you are aiming to attain a personal goal or attempting to find a solution to a certain task, you can accomplish it by knowing how to conduct a comparative study. Use this content as a tool to expand your knowledge about this research methodology.

Types of Comparative Research

  • Cross-National Comparative Research Compares different countries to understand variations in policies, behaviors, or social structures (e.g., comparing healthcare systems across nations).
  • Cross-Cultural Comparative Research Focuses on comparing cultural practices, norms, and values between different societies or cultural groups (e.g., comparing attitudes toward education in collectivist vs. individualist cultures).
  • Historical Comparative Research Involves comparing social phenomena or institutions across different historical periods to identify patterns or changes over time (e.g., comparing political systems before and after a revolution).
  • Case-Oriented Comparative Research Focuses on in-depth analysis of a few cases (such as countries, organizations, or groups) to examine differences and similarities in specific contexts (e.g., comparing two educational institutions).
  • Variable-Oriented Comparative Research Seeks to identify relationships between variables across different cases, often using large datasets (e.g., comparing unemployment rates and economic growth across regions).
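
To make the variable-oriented approach more concrete, here is a minimal Python sketch of checking whether two variables move together across cases. The regions and figures are hypothetical placeholders invented purely for illustration, and the example assumes Python 3.10+ (for statistics.correlation).

```python
# Variable-oriented comparison sketch: do unemployment and growth move together
# across cases? All figures below are hypothetical, for illustration only.
# Requires Python 3.10+ for statistics.correlation.
import statistics

# (unemployment rate %, GDP growth %) for a handful of hypothetical regions
regions = {
    "Region A": (4.2, 3.1),
    "Region B": (7.8, 1.2),
    "Region C": (5.5, 2.4),
    "Region D": (9.1, 0.6),
}

unemployment = [u for u, _ in regions.values()]
growth = [g for _, g in regions.values()]

# Pearson correlation between the two variables across the cases
r = statistics.correlation(unemployment, growth)
print(f"Correlation between unemployment and growth across regions: {r:.2f}")
```

With real data, the same idea scales to large datasets and more formal statistical models; the point is simply that variable-oriented designs compare measured variables across many cases rather than studying a few cases in depth.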

How to Design Comparative Research?

Comparative research is a method used to examine and contrast two or more cases, groups, or phenomena in order to identify similarities, differences, and patterns. It is widely used in various fields, including education, sociology, political science, and economics.

1. Define Your Research Question: Start by formulating a clear, concise research question. Your question should focus on identifying specific similarities and differences between the cases you’re comparing. Examples include: How do education systems in the U.S. and Finland compare in terms of student outcomes? What are the differences in political participation between urban and rural communities?

2. Select the Units of Comparison: Next, identify the cases or units you will compare. These can be: Countries (e.g., comparing economic growth rates of different nations). Groups of people (e.g., comparing the behavior of teenagers and adults). Events (e.g., comparing political revolutions in different regions). Institutions (e.g., comparing healthcare systems).

3. Determine the Criteria for Comparison: Once you have selected your units, identify the key variables or criteria for comparison. These could include: Quantitative Variables: Numerical data, such as literacy rates, income levels, or population sizes. Qualitative Variables: Descriptive data, such as cultural practices, policies, or historical events.

4. Choose Your Research Methodology: Comparative research can be carried out using different methods depending on the nature of the data and research objectives. The two most common approaches are: Quantitative Research: Involves collecting numerical data and using statistical techniques to identify patterns, differences, or correlations. This method is suitable for large datasets. Qualitative Research: Involves in-depth analysis of non-numerical data such as interviews, case studies, and textual content. This method is appropriate when exploring complex social phenomena or behaviors.

5. Establish a Time Frame: Cross-sectional Comparisons: Look at different cases at a single point in time (e.g., comparing literacy rates in 2020 between two countries). Longitudinal Comparisons: Analyze cases over a period of time (e.g., comparing changes in income inequality over several decades).
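
The two time frames can be illustrated with a short sketch. The literacy figures below are invented purely to show how the data would be organized in each design; they are not real statistics.

```python
# Hypothetical literacy rates (%) used only to illustrate the two designs.

# Cross-sectional comparison: two countries at a single point in time (2020)
literacy_2020 = {"Country X": 94.0, "Country Y": 99.0}
gap = literacy_2020["Country Y"] - literacy_2020["Country X"]
print(f"Cross-sectional gap in 2020: {gap:.1f} percentage points")

# Longitudinal comparison: the same indicator tracked over several decades
literacy_over_time = {
    "Country X": {1990: 85.0, 2005: 90.0, 2020: 94.0},
    "Country Y": {1990: 97.0, 2005: 98.0, 2020: 99.0},
}
for country, series in literacy_over_time.items():
    change = series[2020] - series[1990]
    print(f"{country}: {change:+.1f} points between 1990 and 2020")
```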

6. Collect Data

  • Surveys and questionnaires
  • Existing databases and archives
  • Interviews and focus groups
  • Case studies and historical records

7. Analyze the Data

  • Identifying key similarities and differences between the cases.
  • Looking for patterns that explain these differences.
  • Testing hypotheses based on your research question.
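
As a rough illustration of this step, the sketch below compares hypothetical assessment scores for two education systems using simple descriptive statistics. The system names and score values are assumptions made up for the example; a real study would normally follow this with an appropriate inferential test (for example, a t-test).

```python
# Hypothetical assessment scores for students in two education systems.
# Values are invented for illustration only.
import statistics

scores = {
    "System A": [512, 498, 530, 505, 521, 490],
    "System B": [541, 529, 548, 536, 552, 544],
}

# Descriptive comparison: central tendency and spread for each group
for system, values in scores.items():
    print(f"{system}: mean={statistics.mean(values):.1f}, "
          f"stdev={statistics.stdev(values):.1f}")

# Key similarity/difference: the gap between the group means
diff = statistics.mean(scores["System B"]) - statistics.mean(scores["System A"])
print(f"Mean difference (System B - System A): {diff:.1f} points")
```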

8. Interpret and Discuss the Findings

  • What do the similarities and differences mean?
  • What insights do they provide?
  • How do they answer your research question?
  • Do the findings support existing theories or suggest new hypotheses?

9. Present Your Research

  • Research papers or reports with detailed descriptions of methodology, results, and conclusions.
  • Charts, graphs, or tables to illustrate quantitative findings.
  • Narrative descriptions or case studies for qualitative data.

FAQs

How is comparative research different from other research methods?

Unlike other research methods that may focus on a single case or phenomenon, comparative research emphasizes examining multiple cases simultaneously to identify broad trends or unique distinctions. It often involves comparing across different social, political, cultural, or geographical contexts.

How does Comparative Research contribute to policy-making?

Comparative research can inform policy by showing how different approaches yield different results across contexts. For example, comparing healthcare systems may reveal which policies lead to better outcomes, guiding other nations or regions to adopt similar practices.

What is a common pitfall in Comparative Research?

One common pitfall is overgeneralization. Just because a pattern exists in two or more cases does not mean it applies universally. Researchers must ensure that they take contextual differences into account and avoid drawing conclusions that overlook these distinctions.

How can Comparative Research be used in education?

In education, comparative research might be used to compare educational outcomes, teaching methods, and policies across different countries, states, or districts to determine what factors contribute to better student performance or educational equity.

Can Comparative Research be applied to historical studies?

Yes, historical comparative research involves comparing social, political, or economic phenomena across different historical periods to identify patterns of change, continuity, or transformation over time.


Thesis Title: Examples and Suggestions from a PhD Grad

When you’re faced with writing up a thesis, choosing a title can often fall to the bottom of the priority list. After all, it’s only a few words. How hard can it be?!

In the grand scheme of things I agree that picking your thesis title shouldn’t warrant that much thought; however, my own choice is one of the few regrets I have from my PhD. I therefore think there is value in spending some time considering the options available.

In this post I’ll guide you through how to write your own thesis title and share real-world examples. Although my focus is on the PhD thesis, I’ve also included plenty of thesis title examples for bachelor’s and master’s research projects too.

Hopefully by the end of the post you’ll feel ready to start crafting your own!

Why your thesis title is at least somewhat important

It sounds obvious, but your thesis title is the first, and often only, interaction people will have with your thesis: for instance, hiring managers for jobs that you may wish to apply for in the future. Therefore you want the title to give a good sense of what your research involved.

Many people will list the title of their thesis on their CV, at least for a while after graduating. All of the example titles I’ve shared below came from my repository of academic CVs . I’d say roughly 30% of all the academics on that page list their thesis title, which includes academics all the way up to full professor.

Your thesis title could therefore feature on your CV for your whole career, so it is probably worth a bit of thought!

My suggestions for choosing a good thesis title

  • Make it descriptive of the research so it’s immediately obvious what it is about! Most universities will publish student theses online ( here’s mine! ) and they’re indexed so can be found via Google Scholar etc. Therefore give your thesis a descriptive title so that interested researchers can find it in the future.
  • Don’t get lost in the detail . You want a descriptive title but avoid overly lengthy descriptions of experiments. Unless a certain analytical technique etc was central to your research, I’d suggest by default* to avoid having it in your title. Including certain techniques will make your title, and therefore research, look overly dated, which isn’t ideal for potential job applications after you graduate.
  • The title should tie together the chapters of your thesis. A well-phrased title can do a good job of summarising the overall story of your thesis. Think about each of your research chapters and ensure that the title makes sense for each of them.
  • Be strategic. Are there certain parts of your work you want to emphasise? Consider making them more prominent in your title. For instance, if you know you want to pivot to a slightly different research area or career path after your PhD, there may be alternative phrasings which describe your work just as well but could be better understood by those in the field you’re moving into. I utilised this a bit in my own title, which we’ll come onto shortly.
  • Do your own thing. Having just laid out some suggestions, do make sure you’re personally happy with the title. You get a lot of freedom to choose your title, so use it however you fancy. For example, I’ve known people to use puns in their title, so if that’s what you’re into don’t feel overly constrained.

*This doesn’t always hold true and certainly don’t take my advice if 1) listing something in your title could be a strategic move 2) you love the technique so much that you’re desperate to include it!

Thesis title examples

To help give you some ideas, here are some example thesis titles from Bachelors, Masters and PhD graduates. These all came from the academic CVs listed in my repository here .

Bachelor’s thesis title examples

Hysteresis and Avalanches Paul Jager , 2014 – Medical Imaging – DKFZ Head of ML Research Group –  direct link to Paul’s machine learning academic CV

The bioenergetics of a marine ciliate, Mesodinium rubrum Holly Moeller , 2008 – Ecology & Marine Biology – UC Santa Barbara Assistant Professor –  direct link to Holly’s marine biology academic CV

Functional syntactic analysis of prepositional and causal constructions for a grammatical parser of Russian Ekaterina Kochmar , 2008 – Computer Science – University of Bath Lecturer Assistant Prof –  direct link to Ekaterina’s computer science academic CV

Master’s thesis title examples

Creation of an autonomous impulse response measurement system for rooms and transducers with different methods Guy-Bart Stan , 2000 – Bioengineering – Imperial Professor –  direct link to Guy-Bart’s bioengineering academic CV

Segmentation of Nerve Bundles and Ganglia in Spine MRI using Particle Filters Adrian Vasile Dalca , 2012 – Machine Learning for healthcare – Harvard Assistant Professor & MIT Research Scientist –  direct link to Adrian’s machine learning academic CV

The detection of oil under ice by remote mode conversion of ultrasound Eric Yeatman , 1986 – Electronics – Imperial Professor and Head of Department –  direct link to Eric’s electronics academic CV

Ensemble-Based Learning for Morphological Analysis of German Ekaterina Kochmar , 2010 – Computer Science – University of Bath Lecturer Assistant Prof –  direct link to Ekaterina’s computer science academic CV

VARiD: A Variation Detection Framework for Color-Space and Letter-Space Platforms Adrian Vasile Dalca , 2010 – Machine Learning for healthcare – Harvard Assistant Professor & MIT Research Scientist –  direct link to Adrian’s machine learning academic CV

Identification of a Writer’s Native Language by Error Analysis Ekaterina Kochmar , 2011 – Computer Science – University of Bath Lecturer Assistant Prof –  direct link to Ekaterina’s computer science academic CV

On the economic optimality of marine reserves when fishing damages habitat Holly Moeller , 2010 – Ecology & Marine Biology – UC Santa Barbara Assistant Professor –  direct link to Holly’s marine biology academic CV

Sensitivity Studies for the Time-Dependent CP Violation Measurement in B⁰ → K_S K_S K_S at the Belle II Experiment Paul Jager, 2016 – Medical Imaging – DKFZ Head of ML Research Group – direct link to Paul’s machine learning academic CV

PhD thesis title examples

Spatio-temporal analysis of three-dimensional real-time ultrasound for quantification of ventricular function Elsa Angelini – Medicine – Imperial Senior Data Scientist – direct link to Elsa’s medicine academic CV

The role and maintenance of diversity in a multi-partner mutualism: Trees and Ectomycorrhizal Fungi Holly Moeller , 2015 – Ecology & Marine Biology – UC Santa Barbara Assistant Professor –  direct link to Holly’s marine biology academic CV

Bayesian Gaussian processes for sequential prediction, optimisation and quadrature Michael Osborne , 2010 – Machine Learning – Oxford Full Professor –  direct link to Michael’s machine learning academic CV

Global analysis and synthesis of oscillations: a dissipativity approach Guy-Bart Stan , 2005 – Bioengineering – Imperial Professor –  direct link to Guy-Bart’s bioengineering academic CV

Coarse-grained modelling of DNA and DNA self-assembly Thomas Ouldridge , 2011– Bioengineering – Imperial College London Senior Lecturer / Associate Prof –  direct link to Thomas’ bioengineering academic CV

4D tomographic image reconstruction and parametric maps estimation: a model-based strategy for algorithm design using Bayesian inference in Probabilistic Graphical Models (PGM) Michele Scipioni , 2018– Biomedical Engineer – Harvard Postdoctoral Research Fellow –  direct link to Michele’s biomedical engineer academic CV

Error Detection in Content Word Combinations Ekaterina Kochmar , 2016 – Computer Science – University of Bath Lecturer Assistant Prof –  direct link to Ekaterina’s computer science academic CV

Genetic, Clinical and Population Priors for Brain Images Adrian Vasile Dalca , 2016 – Machine Learning for healthcare – Harvard Assistant Professor & MIT Research Scientist –  direct link to Adrian’s machine learning academic CV

Challenges and Opportunities of End-to-End Learning in Medical Image Classification Paul Jager , 2020 – Medical Imaging – DKFZ Head of ML Research Group –  direct link to Paul’s machine learning academic CV

K₂NiF₄ materials as cathodes for intermediate temperature solid oxide fuel cells Ainara Aguadero, 2006 – Materials Science – Imperial Reader – direct link to Ainara’s materials science academic CV

Applications of surface plasmons – microscopy and spatial light modulation Eric Yeatman , 1989 – Electronics – Imperial Professor and Head of Department –  direct link to Eric’s electronics academic CV

Geometric Algorithms for Objects in Motion Sorelle Friedler , 2010 – Computer science – Haverford College Associate Professor –  direct link to Sorelle’s computer science academic CV .

Geometrical models, constraints design, information extraction for pathological and healthy medical image Elsa Angelini – Medicine – Imperial Senior Data Scientist – direct link to Elsa’s medicine academic CV

Why I regret my own choice of PhD thesis title

I should say from the outset that I assembled my thesis in quite a short space of time compared to most people. So I didn’t really spend particularly long on any one section, including the title.

However, my main supervisor even spelled out for me that once the title was submitted to the university it would be permanent. In other words: think wisely about your title.

What I started with

Initially I drafted the title as something like: Three dimensional correlative imaging for cartilage regeneration, which I thought was nice, catchy and descriptive.

I decided to go for “correlative imaging” because, not only did it describe the experiments well, but it also sounded kind of technical and fitting of a potential pivot into AI. I’m pleased with that bit of the title.

What I ended up with

Before submitting the title to the university (required ahead of the viva), I asked my supervisors for their thoughts.

One of my well-intentioned supervisors suggested that, given that my project didn’t involve verifying regenerative quality, I probably shouldn’t state cartilage regeneration. Instead, they suggested, I should state what I was experimenting on (the materials) rather than the overall goal of the research (aiding cartilage regeneration efforts).

With this advice I dialled back my choice of wording and the thesis title I went with was:

Three dimensional correlative imaging for measurement of strain in cartilage and cartilage replacement materials

Reading it back now, I’m reminded of how much less I like it than my initial idea!

I put up basically no resistance to the supervisor’s choice, even though the title sounds so much more boring in my opinion. I just didn’t think much of it at the time. Furthermore, most of my PhD was actually in a technique which is four-dimensional (looking at a series of 3D scans over time, hence 4D), which would have sounded way more sciency and fitting of a PhD.

What I wish I’d gone with

If I had the choice again, I’d have gone with:

Four-dimensional correlative imaging for cartilage regeneration

Which, would you believe it, is exactly what it states on my CV…

Does the thesis title really matter?

In all honesty, your choice of thesis title isn’t that important. If you come to regret it, as I do, it’s not the end of the world. There are much more important things in life to worry about.

If you decide at a later stage that you don’t like it you can always describe it in a way that you prefer. For instance, in my CV I describe my PhD as I’d have liked the title to be. I make no claim that it’s actually the title so consider it a bit of creative license.

Given that as your career progresses you may not even refer back to your thesis much, it’s really not worth stressing over. However, if you’re yet to finalise your thesis title I do still think it is worth a bit of thought and hopefully this article has provided some insights into how to choose a good thesis title.

My advice for developing a thesis title

  • Draft the title early. Drafting it early can help give clarity to the overall message of your research. For instance, while you’re assembling the rest of your thesis you can check that the title encompasses the research chapters you’re including, and likewise that the research experiments you’re including fall within what the title describes. Drafting it early also gives you more time to think it over. As with everything: having a first draft is really important to iterate on.
  • Look at some example titles . Such as those featured above!
  • If you’re not sure about your title, ask a few other people what they think . But remember that you have the final say!

I hope this post has been useful for those of you who are finalising your thesis and need to decide on a thesis title.


Honors Theses - Examples

1. A Carne e a Navalha: Self-Reflective Representation of Marginalized Characters in Brazilian Narrative by Clarice Lispector, Eduardo Coutinho, and Racionais MCs by Corina Ahlswede, 2018

2.   The Travel of Clear Waters: A Case Study on the Afterlife of a Poem by Kaiyu Xu, 2019

3.   Examining Blurring: An Anti-anthropocentric Comparative Study of European Vampirism and Shuten Dōji by Yisheng Tang, 2018

4.  The Revolutionary Potential of Mythology  by Zachary Morgan, 2017

5.  “Use your authority!”: Pedagogy in William Shakespeare’s The Tempest by Wesley Boyko, 2018

6.  Train of Thought by Yana Zlochistaya, 2017

7.   “Between here and there”:  Assertion of the Poetic Voice in the Poetry of Rita Bouvier and Marilyn Dumont by Molly Kearnan, 2020

8.  Unveiling the Invaluable:  Female Voices, Affective Labor, and Play in Reḵẖtī Poetry by Elizabeth Gobbo, 2020

9.  The Prospect Garden of Forking Paths: Reading Jorge Luis Borges’s Fiction through Cao Xueqin’s Honglou meng and Buddhism by Jenny Chen, 2023

10.  La Politisation du Féminisme Littéraire et de la Différence Sexuelle chez Woolf et Cixous by Samantha Bonadio, 2023

11. Aeneas’ Empire and Césaire’s Evasion: Black Poetics as Refusal and Redaction in Cahier d’un retour au pays natal by des jackson, 2023



Comparative Literature Theses and Dissertations

Theses/Dissertations from 2024

Emerson and Nietzsche: Appropriation, Translation, and Experimentation , Maximilian Gindorf

Theses/Dissertations from 2023

Constructing Selfhood Through Fantasy: Mirror Women and Dreamscape Conversations in Olga Grushin’s Forty Rooms , Grace Marie Alger

Eugene O’Neill Returns: Theatrical Modernization and O’Neill Adaptations in 1980s China , Shuying Chen

The Supernatural in Migration: A Reflection on Senegalese Literature and Film , Rokhaya Aballa Dieng

Breaking Down the Human: Disintegration in Nineteenth-Century Fiction , Benjamin Mark Driscol

Archetypes Revisited: Investigating the Power of Universals in Soviet and Hollywood Cinema , Iana Guselnikova

Planting Rhizomes: Roots and Rhizomes in Maryse Condé’s Traversée de la Mangrove and Calixthe Beyala’s Le Petit Prince de Belleville , Rume Kpadamrophe

Violence, Rebellion, and Compromise in Chinese Campus Cinema: The Comparison of Cry Me a Sad River and Better Days, Chunyu Liu

Tracing Modern and Contemporary Sino-French Literary and Intellectual Relations: China, France, and Their Shifting Peripheries , Paul Timothy McElhinny

Truth and Knowledge in a Literary Text and Beyond: Lydia Chukovskaya’s Sofia Petrovna at the Intersections Between Selves, Culture, and Paratext , Angelina Rubina

From Roland to Gawain, or the Origin of Personified Knights , Clyde Tilson

Theses/Dissertations from 2022

Afro-Diasporic Literatures of the United States and Brazil: Imaginaries, Counter-Narratives, and Black Feminism in the Americas , David E. S. Beek

The Pursuit of Good Food: The Alimentary Chronotope in Madame Bovary , Lauren Flinner

Form and Voice: Representing Contemporary Women’s Subaltern Experience in and Beyond China , Tingting Hu

Geography of a “Foreign” China: British Intellectuals’ Encounter With Chinese Spaces, 1920-1945 , Yuzhu Sun

Truth and Identity in Dostoevsky’s Raskolnikov and Prince Myshkin , Gwendolyn Walker

Theses/Dissertations from 2021

Postcolonial Narrative and The Dialogic Imagination: An Analysis of Early Francophone West African Fiction and Cinema, Seydina Mouhamed Diouf

The Rising of the Avant-Garde Movement In the 1980s People’s Republic of China: A Cultural Practice of the New Enlightenment , Jingsheng Zhang

Theses/Dissertations from 2020

L’ Entre- Monde : The Cinema of Alain Gomis , Guillaume Coly

Digesting Gender: Gendered Foodways in Modern Chinese Literature, 1890s–1940s , Zhuo Feng

The Deconstruction of Patriarchal War Narratives in Svetlana Alexievich’s The Unwomanly Face of War , Liubov Kartashova

Pushing the Limits of Black Atlantic and Hispanic Transatlantic Studies Through the Exploration of Three U.S. Afro-Latino Memoirs, Julia Luján

Taiwanese Postcolonial Identities and Environmentalism in Wu Ming-Yi’s the Stolen Bicycle , Chihchi Sunny Tsai

Games and Play of Dream of the Red Chamber , Jiayao Wang

Theses/Dissertations from 2019

Convertirse en Inmortal, 成仙 ChéngxiāN, Becoming Xian: Memory and Subjectivity in Cristina Rivera Garza’s Verde Shanghai , Katherine Paulette Elizabeth Crouch

Between Holy Russia and a Monkey: Darwin's Russian Literary and Philosophical Critics , Brendan G. Mooney

Emerging Populations: An Analysis of Twenty-First Century Caribbean Short Stories , Jeremy Patterson

Time, Space and Nonexistence in Joseph Brodsky's Poetry , Daria Smirnova

Theses/Dissertations from 2018

Through the Spaceship’s Window: A Bio-political Reading of 20th Century Latin American and Anglo-Saxon Science Fiction , Juan David Cruz

The Representations of Gender and Sexuality in Contemporary Arab Women’s Literature: Elements of Subversion and Resignification. , Rima Sadek

Insects As Metaphors For Post-Civil War Reconstruction Of The Civic Body In Augustan Age Rome , Olivia Semler

Theses/Dissertations from 2017

Flannery O’Connor’s Art And The French Renouveau Catholique: A Comparative Exploration Of Contextual Resources For The Author’s Theological Aesthetics Of Sin and Grace , Stephen Allen Baarendse

The Quixotic Picaresque: Tricksters, Modernity, and Otherness in the Transatlantic Novel, or the Intertextual Rhizome of Lazarillo, Don Quijote, Huck Finn, and The Reivers , David Elijah Sinsabaugh Beek

Piglia and Russia: Russian Influences in Ricardo Piglia’s Nombre Falso , Carol E. Fruit Diouf

Beyond Life And Death Images Of Exceptional Women And Chinese Modernity , Wei Hu

Archival Resistance: A Comparative Reading of Ulysses and One Hundred Years of Solitude , Maria-Josee Mendez

Narrating the (Im)Migrant Experience: 21st Century African Fiction in the Age of Globalization , Bernard Ayo Oniwe

Narrating Pain and Freedom: Place and Identity in Modern Syrian Poetry (1970s-1990s) , Manar Shabouk

Theses/Dissertations from 2016

The Development of ‘Meaning’ in Literary Theory: A Comparative Critical Study , Mahmoud Mohamed Ali Ahmad Elkordy

Familial Betrayal And Trauma In Select Plays Of Shakespeare, Racine, And The Corneilles , Lynn Kramer

Evil Men Have No Songs: The Terrorist and Littérateur Boris Savinkov, 1879-1925, Irina Vasilyeva Meier

Theses/Dissertations from 2015

Resurrectio Mortuorum: Plato’s Use of Ἀνάγκη in the Dialogues , Joshua B. Gehling

Two Million "Butterflies" Searching for Home: Identity and Images of Korean Chinese in Ho Yon-Sun's Yanbian Narratives , Xiang Jin

The Trialectics Of Transnational Migrant Women’s Literature In The Writing Of Edwidge Danticat And Julia Alvarez , Jennifer Lynn Karash-Eastman

Unacknowledged Victims: Love between Women in the Narrative of the Holocaust. An Analysis of Memoirs, Novels, Film and Public Memorials , Isabel Meusen

Making the Irrational Rational: Nietzsche and the Problem of Knowledge in Mikhail Bulgakov's The Master and Margarita , Brendan Mooney

Invective Drag: Talking Dirty in Catullus, Cicero, Horace, and Ovid , Casey Catherine Moore

Destination Hong Kong: Negotiating Locality in Hong Kong Novels 1945-1966 , Xianmin Shen

H.P. Lovecraft & The French Connection: Translation, Pulps and Literary History , Todd David Spaulding

Female Representations in Contemporary Postmodern War Novels of Spain and the United States: Women as Tools of Modern Catharsis in the Works of Javier Cercas and Tim O'Brien , Joseph P. Weil

Theses/Dissertations from 2014

Poetic Appropriations in Vergil’s Aeneid: A Study in Three Themes Comprising Aeneas’ Character Development , Edgar Gordyn

Ekphrasis and Skepticism in Three Works of Shakespeare , Robert P. Irons

Theses/Dissertations from 2013

The Role of the Trickster Figure and Four Afro-Caribbean Meta-Tropes In the Realization of Agency by Three Slave Protagonists , David Sebastian Cross

Putting Place Back Into Displacement: Reevaluating Diaspora In the Contemporary Literature of Migration , Christiane Brigitte Steckenbiller

Using Singular Value Decomposition in Classics: Seeking Correlations in Horace, Juvenal and Persius against the Fragments of Lucilius , Thomas Whidden

Theses/Dissertations from 2012

Decolonizing Transnational Subaltern Women: The Case of Kurasoleñas and New York Dominicanas , Florencia Cornet

Representation of Women In 19Th Century Popular Art and Literature: Forget Me Not and La Revista Moderna , Juan David Cruz

53x+m³=Ø? (Sex+Me=No Result?): Tropes of Asexuality in Literature and Film, Jana Fedtke

Argentina in The African Diaspora: Afro-Argentine And African American Cultural Production, Race, And Nation Building in the 19th Century , Julia Lujan

Male Subjectivity and Twenty-First Century German Cinema: Gender, National Identity, and the Problem of Normalization, Richard Sell

Theses/Dissertations from 2011

Blue Poets: Brilliant Poetry , Evangelin Grace Chapman-Wall

Sickness of the Spirit: A Comparative Study of Lu Xun and James Joyce , Liang Meng

Dryden and the Solution to Domination: Bonds of Love In the Conquest of Granada , Lydia FitzSimons Robins

Theses/Dissertations from 2010

The Family As the New Collectivity of Belonging In the Fiction of Bharati Mukherjee and Jhumpa Lahiri , Sarbani Bose

Lyric Transcendence: The Sacred and the Real in Classical and Early-Modern Lyric, Larry Grant Hamby

Abd al-Rahman Al-Kawakibi's Tabai` al-Istibdad wa Masari` al-Isti`bad (The Characteristics of Despotism and The Demises of Enslavement): A Translation and Introduction , Mohamad Subhi Hindi

Re-Visions: Nazi Germany and Fascist Italy In German and Italian Film and Literature , Kristina Stefanic Brown

Plato In Modern China: A Study of Contemporary Chinese Platonists , Leihua Weng

Making Victims: History, Memory, and Literature In Japan's Post-War Social Imaginary , Kimberly Wickham

Theses/Dissertations from 2009

The Mirrored Body: Doubling and Replacement of the Feminine and Androgynous Body in Hadia Said's Artist and Haruki Murakami's Sputnik Sweetheart, Fatmah Alsalamean

Making Monsters: The Monstrous-Feminine In Horace and Catullus , Casey Catherine Moore

Not Quite American, Not Quite European: Performing "Other" Claims to Exceptionality In Francoist Spain and the Jim Crow South , Brittany Powell

Developing Latin American Feminist Theory: Strategies of Resistance In the Novels of Luisa Valenzuela and Sandra Cisneros , Jennifer Lyn Slobodian



How to Write a Title for a Compare and Contrast Essay


The title is an important part of any essay. After all, it’s the first thing people read. When you write a title for your compare and contrast essay, it needs to let your reader know what subjects you want to compare and how you plan to compare them. Some essays need more formal, informative titles while others benefit from creative titles. No matter what, just remember to keep your title short, readable, and relevant to your writing.

Creating an Informative Title

Step 1: Establish your audience.

  • Informative titles like “The Benefit of Owning a Cat vs. a Dog”, for example, would be better for a classroom setting, while a creative title like “My Dog is Better than a Cat” would be better for a blog. [2]

Step 2: List what you want to compare.

  • You only need to include the broad topics or themes you want to compare, such as dogs and cats. Don’t worry about putting individual points in your title. Those points will be addressed in the body of your essay.
  • You may be comparing something to itself over time or space, like rock music in the 20th and 21st centuries, or Renaissance art in Italy and the Netherlands. If that’s the case, list the subject you want to compare, and places or timeframes that you are using for your comparison.

Step 3: Decide if your essay is meant to be persuasive or not.

  • Persuasive essay titles might use words like “benefit,” “better,” “advantages,” “should,” “will,” and other words that convey a sense that one subject has an advantage over the other.
  • Informative titles might use words like “versus,” “compared,” or “difference”. These words don’t suggest that one subject is better or worse, they simply point out they are not the same.

Step 4: Write your informative title.

  • The end result should be a title that lets readers know what you want to compare and contrast, and how you plan on doing so in just a few words. If, for example, you're comparing rock music across time, your title might be The Difference in Chord Progressions of 20th and 21st-century Rock Music. [4]

Generating a Creative Title

Step 1: Establish your purpose.

  • If, for example, you just want to compare white and milk chocolate, you are providing facts. Your goal will not be to make your audience think one particular chocolate is better. Your title, then, may be something like "Loco for Cocoa: The Differences Between Types of Chocolate."
  • If, however, you want to tell your audience why milk chocolate is better, you are reinforcing a popular idea. If you want to explain why white chocolate is better, you are going against a popular idea. In that case, a better title might be "Milking it - Why White Chocolate is Totally the Best Chocolate."

Step 2: Avoid direct comparison words.

  • “Do Hash Browns Stack Up Against Fries as a Burger Side?” creates a sense of tension between your subjects and challenges a popular opinion. It is a more engaging title for your readers than “Comparing Hash Browns and Fries as Burger Sides.”

Step 3: Use a colon.

  • For example, if you want to write an essay comparing two works of art by Van Gogh, you may use a title like, “Look at Him Gogh: Comparing Floral Composition in Almond Blossoms and Poppy Flowers.”

Keeping Your Title Relevant and Readable

Step 1: Write the paper first.

  • Your essay is where you will make your arguments. Your title just needs to convey your subjects and establish that you plan to compare and contrast them in some way.

Step 3: Ask a friend for their opinion.

Expert Q&A

  • If you're struggling to figure out a title, try writing your thesis at the top of a blank page, then brainstorming all the titles you can think of below. Go through slowly to see which ones fit your paper the best and which you like the most.


  • ↑ https://www.kibin.com/essay-writing-blog/how-to-write-good-essay-titles/
  • ↑ http://www.schooleydesigns.com/compare-and-contrast-essay-title/
  • ↑ http://www.editage.com/insights/3-basic-tips-on-writing-a-good-research-paper-title
  • ↑ http://canuwrite.com/article_titles.php
  • ↑ http://writing.umn.edu/sws/assets/pdf/quicktips/titles.pdf
  • ↑ http://www.aacstudents.org/tips-for-essay-writing-asking-friends-to-help-you-out.php


How to Write a Comparative Analysis Dissertation: Useful Guidelines


Writing a dissertation involves more than just demonstrating your expertise in your chosen field of study. It also requires using important skills, such as analytical thinking. Without it, developing sound theories, introducing arguments, or making conclusions would be impossible. And nowhere is this ability more prominently showcased than in writing comparative analysis dissertations.

Comparative analysis is a helpful method you can use to do research. Remember writing compare-and-contrast essays at school? It’s actually very similar to conducting this type of analysis. But it also has plenty of peculiarities that make it a unique approach. Keep reading to learn more about it!

What Is a Comparative Analysis Dissertation?


Comparative analysis boils down to studying similarities and differences between two or more things, be it theories, texts, processes, personalities, or time periods. This method is especially useful in conducting social sciences , humanities, history, and business research.

Conducting a comparative analysis helps you achieve multiple goals:

  • It allows you to find parallels and dissimilarities between your subjects and use them to make broader conclusions.
  • Putting two or more things against each other also helps to see them in a new light and notice the usually neglected aspects.
  • In addition to similarities and differences, conducting a comparative analysis helps to determine causality, that is, the reason why these characteristics exist in the first place.

Comparative Analysis Types

Depending on your research methods, your comparative analysis dissertation can be of two types:

  • Qualitative comparative analysis revolves around individual examples. It uses words and concepts to describe the subjects of comparison and make conclusions from them. Essentially, it’s about studying a few examples closely to understand their specific details. This method will be especially helpful if you’re writing a comparative case study dissertation.
  • Quantitative comparative dissertations will use numbers and statistics to explain things. It helps make general statements about big sample groups. You will usually need a lot of examples to gather plenty of reliable numerical data for this kind of research.

There are no strict rules regarding these types. You can use the features of both in your comparative dissertation if you want to.

Possible Difficulties of Writing a Comparative Analysis Dissertation

As you can see, comparison is an excellent research method that can be a great help in dissertation writing . But it also has its drawbacks and challenges. It’s essential to be aware of them and do your best to overcome them:

  • Your chosen subjects of comparison may have very little in common . In that case, it might be tricky to come up with at least some similarities.
  • Sometimes, there may not be enough information about the things you want to study. This will seriously limit your choices and may affect the accuracy of your research results. To avoid it, we recommend you choose subjects you’re already familiar with.
  • Choosing a small number of cases or samples will make it much more challenging to generalize your findings . It may also cause you to overlook subtle ways in which the subjects influence each other. That’s why it’s best to choose a moderate number of items from which to draw comparisons, usually between 5 and 40.
  • It’s also essential that your dissertation looks different from a high school compare-and-contrast essay. Instead, your work should be appropriately structured. Read on to learn how to do it!

Elements of Dissertation Comparative Analysis

Do you want your dissertation comparative analysis to be successful? Then make sure it has the following key elements:

  • Context Your comparative dissertation doesn’t exist in a vacuum. It has historical and theoretical contexts as well as previous research surrounding it. You can cover these aspects in your introduction and literature review .
  • Goals It should be clear to the reader why you want to compare two particular things. That’s why, before you start making your dissertation comparative analysis, you’ll need to explain your goal. For example, the goal of a dissertation in human science can be to describe and classify something.
  • Modes of Comparison This refers to the way you want to conduct your research. There are four modes of comparison to choose from: similarity-focused, difference-focused, genus-species relationship, and refocusing.
Similarity-focused studies concentrate on what’s similar and pay little attention to differences.
Difference-focused research uses the opposite approach, highlighting differences.
Genus-species studies examine how various subjects (“species”) relate to a broader category (“genus”) to which they belong.
Refocusing allows you to better understand one thing by looking at it through the lens of another.
  • Scale This is the degree to which your study will be zooming in on the subjects of comparison. It’s similar to looking at maps. There are maps of the entire world, of separate countries, and of smaller locations. The scale of your research refers to how detailed the map is. You will need to use similar scale maps for each subject to conduct a good comparison.
  • Scope This criterion refers to how far removed your subjects are in time and space. Depending on the scope, there are two types of comparisons:
Contextual comparison refers to studying things from the same time and place, for example, two European countries from the medieval period.
The second type of comparison revolves around things from different time periods or places, such as ancient Greek and Chinese religions. This type isn’t necessarily about completely separate things. It just means that they’re not immediately related.
  • Research Question This is the key inquiry that guides your entire study. In a comparative analysis thesis, the research question usually addresses similarities and differences, but it can also focus on other patterns you’ll be exploring. It can belong to one of the following types, depending on the kind of analysis you want to apply:
Your research question can present your findings by describing how things are different or similar.
Alternatively, it can explain how some aspects in one group influence another group.
A question of the third type shows how two or more things are related in different contexts. Essentially, it questions whether the same relationship holds true in various cases.
A comparative explanatory question asks why relationships are different in different groups.


  • Data Analysis Here, you analyze similarities, differences, and relationships you’ve identified between the subjects. Make sure to provide your argumentation and explain where your findings come from.
  • Conclusions This element addresses the research question and answers it. It can also point out the significance of similarities and differences that you’ve found.

How to Write a Comparative Analysis Dissertation

Now that you know what your comparative analysis should include, it’s time to learn how to write it! Follow the steps, and you’ll be sure to succeed:

  • Select the Subjects This is the most critical step on which your entire dissertation will depend. To choose things to compare, try to analyze several important factors, including your potential audience , the overall goal of the study, and your interests. It’s also essential to check whether the things you want to discuss are sufficiently studied. While you research possible topics, you may stumble upon untrustworthy AI-generated content. Unfortunately, it’s getting increasingly difficult to differentiate it from human-made writing. To avoid getting into this trap, consider using an AI detection tool . It provides 100% accurate results and is completely free.
  • Describe Your Chosen Items Before you can start comparing the subjects, it’s necessary to describe them in their social and historical contexts. Without taking a long, hard look at your topic’s background, it would be impossible to determine what you should pay attention to during your research. To describe your subjects properly, you will need to study plenty of sources and convey their content in your dissertation. Want to simplify this task? Check out our excellent free summarizer tool !
  • Juxtapose Now, it’s time to do the comparison by checking how similar and different your subjects are. Some may think focusing on the resemblances is more critical, while others find contrasts more exciting. Both these viewpoints are valid, but the best approach is to find the right balance depending on your study’s goal.
  • Provide Redescription Unlike the previous steps, this one is optional. It involves looking at your subjects a second time after conducting the comparison. You might learn new things about them during your study, and they may even shed light on each other (this is called “reciprocal illumination”). This knowledge will likely deepen your understanding or even change it altogether, and it’s worth pointing it out in your comparative case study dissertation.
  • Consider Rectification and Theory Formation These two processes are also optional. They involve revising your writing and theories after conducting your research. This doesn’t mean changing the topic of your study; rather, it refers to changing how you think about your subjects. For example, you may gain some new understanding and realize that you weren’t using the right words to describe your subjects properly. That’s when rectification comes into play: you revise the language used in your discussion to make it more precise and appropriate. This new perspective may even inspire you to come up with a new theory about your topic, in which case you may write about it, too. Usually, though, rectification is enough. If you decide to do it, feel free to use our paraphrasing tool to help you find the right words.
  • Edit and Proofread After you’re done writing the bulk of your text, it’s essential to check it and ensure it passes the plagiarism check. After all, even if you haven’t directly copied other people’s texts, there may still be some percentage of accidental plagiarism that can get you in trouble. To ensure that it doesn’t happen, use our free plagiarism detector .

And this is how you write a comparative analysis dissertation! We hope our tips will be helpful to you. Read our next article if you need help with a  literature review in a dissertation . Good luck with your studies!



How to Write a Comparative Essay – A Complete Guide


A comparative essay is a common assignment for school and college students, yet many students are unaware of the complexities of crafting a strong one.

If you too are struggling with this, don't worry!

In this blog, you will get a complete writing guide for comparative essay writing. From structuring formats to creative topics, this guide has it all.

So, keep reading!


  • 1. What is a Comparative Essay?
  • 2. Comparative Essay Structure
  • 3. How to Start a Comparative Essay?
  • 4. How to Write a Comparative Essay?
  • 5. Comparative Essay Examples
  • 6. Comparative Essay Topics
  • 7. Tips for Writing A Good Comparative Essay
  • 8. Transition Words For Comparative Essays

What is a Comparative Essay?

A comparative essay is a type of essay in which an essay writer compares two or more items. Depending on the assignment, the author examines related subjects in terms of their similarities and differences.

The main purpose of the comparative essay is to:

  • Highlight the similarities and differences in a systematic manner.
  • Provide great clarity of the subject to the readers.
  • Analyze two things and describe their advantages and drawbacks.

A comparative essay is also known as a compare-and-contrast essay or a comparison essay. It analyzes two subjects by either comparing them, contrasting them, or both. A Venn diagram is a useful tool for mapping out the similarities and differences between the two subjects before you start writing.

Moreover, a comparative analysis essay discusses the similarities and differences of themes, items, events, views, places, concepts, etc. For example, you can compare two different novels (e.g., The Adventures of Huckleberry Finn and The Red Badge of Courage).

However, a comparative essay is not limited to specific topics; it can cover almost any pair of subjects that share some meaningful relation.

Comparative Essay Structure

A good comparative essay depends on how well you structure it; a clear structure helps the reader understand your comparison.

Structure matters as much as content, because organizing your essay well lets the reader easily follow the comparisons you make.

The following are the main methods you can use to organize your comparative essay.

Point-by-Point Method 

The point-by-point or alternating method provides a detailed overview of the items that you are comparing. In this method, you organize the items point by point, in terms of their similarities and differences.

This method makes it easier for the writer to handle two very different subjects, and it is highly recommended where some depth and detail are required.

The basic structure of the point-by-point method looks like this:

Introduction
Body Paragraph 1: first point of comparison (Subject A and Subject B)
Body Paragraph 2: second point of comparison (Subject A and Subject B)
Body Paragraph 3: third point of comparison (Subject A and Subject B)
Conclusion

Block Method 

The block method is simpler than the point-by-point method. In this method, you divide the information by subject: first you discuss all the points for one subject, then all the points for the next, and so on.

However, make sure that you address the points in the same order for each subject. This method is best for lengthy essays and complicated subjects.

The basic structure of the block method looks like this:

Introduction
Body Paragraphs on Subject A (all points of comparison)
Body Paragraphs on Subject B (the same points, in the same order)
Conclusion

Keep these methods in mind and choose the one that best suits your subject.

Mixed Paragraphs Method

In this method, each paragraph explains one aspect of the comparison and discusses both subjects together, so you handle one point at a time. This approach gives equal weight to each subject and helps readers identify each point of comparison easily.

How to Start a Comparative Essay?

Here, we have gathered some steps that you should follow to start a well-written comparative essay.  

Choose a Topic

The foremost step in writing a comparative essay is to choose a suitable topic.

Choose a topic or theme that is interesting to write about and appeals to the reader. 

An interesting topic motivates the reader to learn more about the subject. Also, try to avoid overly complicated topics for your comparative essay.

Develop a List of Similarities and Differences 

Create a list of the similarities and differences between the two subjects that you want to include in the essay. This list helps you decide the basis of your comparison and forms your initial plan.

Evaluate the list and establish your argument and thesis statement .
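
A quick way to turn such a list into raw material for your outline is to separate shared traits from unique ones. The toy Python sketch below uses simple set operations; the feature lists are invented for a hypothetical books-versus-movies comparison and are only meant to illustrate the idea.

```python
# Toy sketch: sorting a feature list into similarities and differences.
# The feature sets below are invented for a hypothetical books vs. movies essay.
books  = {"tells a story", "portable", "relies on imagination", "slow pace"}
movies = {"tells a story", "visual effects", "social experience", "fast pace"}

similarities = books & movies        # traits both subjects share
only_books   = books - movies        # traits unique to books
only_movies  = movies - books        # traits unique to movies

print("Similarities:", sorted(similarities))
print("Only books:  ", sorted(only_books))
print("Only movies: ", sorted(only_movies))
```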

Establish the Basis for Comparison 

The basis for comparison is the ground for you to compare the subjects. In most cases, it is assigned to you, so check your assignment or prompt.

Furthermore, the main goal of the comparison essay is to inform the reader of something interesting. It means that your subject must be unique to make your argument interesting.  

Do the Research 

In this step, you have to gather information for your subject. If your comparative essay is about social issues, historical events, or science-related topics, you must do in-depth research.    

However, make sure that you gather data from credible sources and cite them properly in the essay.

Create an Outline

An essay outline serves as a roadmap for your essay, organizing key elements into a structured format.

With your topic, list of comparisons, basis for comparison, and research in hand, the next step is to create a comprehensive outline. 

Here is a standard comparative essay outline:


Introduction

Body Paragraph 1 (first point of comparison): Subject A, Subject B, Analysis

Body Paragraph 2 (second point of comparison): Subject A, Subject B, Analysis

Conclusion


How to Write a Comparative Essay?

Now that you have the basic information organized in an outline, you can get started on the writing process. 

Here are the essential parts of a comparative essay: 

Comparative Essay Introduction 

Start off by grabbing your reader's attention in the introduction . Use something catchy, like a quote, question, or interesting fact about your subjects. 

Then, give a quick background so your reader knows what's going on. 

The most important part is your thesis statement, where you state the main argument , the basis for comparison, and why the comparison is significant.

A typical comparative thesis names both subjects, the basis for comparison, and your main claim about their similarities or differences.

Comparative Essay Body Paragraphs 

The body paragraphs are where you really get into the details of your subjects. Each paragraph should focus on one thing you're comparing.

Start with the first point of comparison, then move on to the next points. Cover two or three key differences to give a complete picture.

After that, switch gears and talk about the things the subjects have in common. Just as you discussed the differences, try to cover a similar number of similarities.

This way, your essay stays balanced and fair. This approach helps your reader understand both the ways your subjects are different and the ways they are similar. Keep it simple and clear for a strong essay.

Comparative Essay Conclusion

In your conclusion , bring together the key insights from your analysis to create a strong and impactful closing.

Consider the broader context or implications of the subjects' differences and similarities. What do these insights reveal about the broader themes or ideas you're exploring?

Discuss the broader implications of these findings and restate your thesis. Avoid introducing new information and end with a thought-provoking statement that leaves a lasting impression.


Comparative Essay Examples

Have a look at these comparative essay example PDFs to get an idea of what a well-written essay looks like.

Comparative Essay on Summer and Winter

Comparative Essay on Books vs. Movies

Comparative Essay Sample

Comparative Essay Thesis Example

Comparative Essay on Football vs Cricket

Comparative Essay on Pet and Wild Animals

Comparative Essay Topics

Comparative essay topics are not very difficult or complex. Check this list of essay topics and pick the one that you want to write about.

  • How do education and employment compare?
  • Living in a big city or staying in a village.
  • The school principal or college dean.
  • New Year vs. Christmas celebration.
  • Dried Fruit vs. Fresh. Which is better?
  • Similarities between philosophy and religion.
  • British colonization and Spanish colonization.
  • Nuclear power for peace or war?
  • Bacteria or viruses.
  • Fast food vs. homemade food.

Tips for Writing A Good Comparative Essay

Writing a compelling comparative essay requires thoughtful consideration and strategic planning. Here are some valuable tips to enhance the quality of your comparative essay:

  • Clearly define what you're comparing, like themes or characters.
  • Plan your essay structure using methods like point-by-point or block paragraphs.
  • Craft an introduction that introduces subjects and states your purpose.
  • Ensure an equal discussion of both similarities and differences.
  • Use linking words for seamless transitions between paragraphs.
  • Gather credible information for depth and authenticity.
  • Use clear and simple language, avoiding unnecessary jargon.
  • Dedicate each paragraph to a specific point of comparison.
  • Summarize key points, restate the thesis, and emphasize significance.
  • Thoroughly check for clarity, coherence, and correct any errors.

Transition Words For Comparative Essays

Transition words are crucial for guiding your reader through the comparative analysis. They help establish connections between ideas and ensure a smooth flow in your essay. 

Here are some transition words and phrases to improve the flow of your comparative essay:

Transition Words for Similarities

  • Correspondingly
  • In the same vein
  • In like manner
  • In a similar fashion
  • In tandem with

Transition Words for Differences

  • On the contrary
  • In contrast
  • Nevertheless
  • In spite of
  • Notwithstanding
  • On the flip side
  • In contradistinction

Check out this blog listing more transition words that you can use to enhance your essay’s coherence!

In conclusion, now that you have the important steps and helpful tips to write a good comparative essay, you can start working on your own essay. 

However, if you find it tough to begin, all you have to do is say ' just do my essay ' and we'll get started.

Our skilled writers can handle any type of essay or assignment you need. So, don't wait—place your order now and make your academic journey easier!

Frequently Asked Questions

How long is a comparative essay?


A comparative essay is typically 4 to 5 pages long, but the exact length depends on your chosen idea and topic.

How do you end a comparative essay?

Here are some tips that will help you to end the comparative essay.

  • Restate the thesis statement
  • Wrap up the entire essay
  • Highlight the main points



15 - COMPARATIVE RESEARCH METHODS

Published online by Cambridge University Press:  05 June 2012

INTRODUCTION

In contrast to the chapters on survey research, experimentation, or content analysis that described a distinct set of skills, in this chapter, a variety of comparative research techniques are discussed. What makes a study comparative is not the particular techniques employed but the theoretical orientation and the sources of data. All the tools of the social scientist, including historical analysis, fieldwork, surveys, and aggregate data analysis, can be used to achieve the goals of comparative research. So, there is plenty of room for the research imagination in the choice of data collection strategies. There is a wide divide between quantitative and qualitative approaches in comparative work. Most studies are either exclusively qualitative (e.g., individual case studies of a small number of countries) or exclusively quantitative, most often using many cases and a cross-national focus (Ragin, 1991:7). Ideally, increasing numbers of studies in the future will use both traditions, as the skills, tools, and quality of data in comparative research continue to improve.

In almost all social research, we look at how social processes vary and are experienced in different settings to develop our knowledge of the causes and effects of human behavior. This holds true if we are trying to explain the behavior of nations or individuals. So, it may then seem redundant to include a chapter in this book specifically dedicated to comparative research methods when all the other methods discussed are ultimately comparative.

  • COMPARATIVE RESEARCH METHODS
  • Paul S. Gray, Boston College, Massachusetts; John B. Williamson, Boston College, Massachusetts; David A. Karp, Boston College, Massachusetts; John R. Dalphin
  • Book: The Research Imagination
  • Online publication: 05 June 2012
  • Chapter DOI: https://doi.org/10.1017/CBO9780511819391.016


The Comparative Essay


What is a comparative essay?

A comparative essay asks that you compare at least two (possibly more) items. These items will differ depending on the assignment. You might be asked to compare

  • positions on an issue (e.g., responses to midwifery in Canada and the United States)
  • theories (e.g., capitalism and communism)
  • figures (e.g., GDP in the United States and Britain)
  • texts (e.g., Shakespeare’s Hamlet and Macbeth )
  • events (e.g., the Great Depression and the global financial crisis of 2008–9)

Although the assignment may say “compare,” the assumption is that you will consider both the similarities and differences; in other words, you will compare and contrast.

Make sure you know the basis for comparison

The assignment sheet may say exactly what you need to compare, or it may ask you to come up with a basis for comparison yourself.

  • Provided by the essay question: The essay question may ask that you consider the figure of the gentleman in Charles Dickens’s Great Expectations and Anne Brontë’s The Tenant of Wildfell Hall . The basis for comparison will be the figure of the gentleman.
  • Developed by you: The question may simply ask that you compare the two novels. If so, you will need to develop a basis for comparison, that is, a theme, concern, or device common to both works from which you can draw similarities and differences.

Develop a list of similarities and differences

Once you know your basis for comparison, think critically about the similarities and differences between the items you are comparing, and compile a list of them.

For example, you might decide that in Great Expectations , being a true gentleman is not a matter of manners or position but morality, whereas in The Tenant of Wildfell Hall , being a true gentleman is not about luxury and self-indulgence but hard work and productivity.

The list you have generated is not yet your outline for the essay, but it should provide you with enough similarities and differences to construct an initial plan.

Develop a thesis based on the relative weight of similarities and differences

Once you have listed similarities and differences, decide whether the similarities on the whole outweigh the differences or vice versa. Create a thesis statement that reflects their relative weights. A more complex thesis will usually include both similarities and differences. Here are examples of the two main cases:

While Callaghan’s “All the Years of Her Life” and Mistry’s “Of White Hairs and Cricket” both follow the conventions of the coming-of-age narrative, Callaghan’s story adheres more closely to these conventions by allowing its central protagonist to mature. In Mistry’s story, by contrast, no real growth occurs.
Although Darwin and Lamarck came to different conclusions about whether acquired traits can be inherited, they shared the key distinction of recognizing that species evolve over time.

Come up with a structure for your essay

Alternating (point-by-point) method:
A (Paragraph 1 in body): new technology and the French Revolution
B (Paragraph 2 in body): new technology and the Russian Revolution
A (Paragraph 3 in body): military strategy and the French Revolution
B (Paragraph 4 in body): military strategy and the Russian Revolution
A (Paragraph 5 in body): administrative system and the French Revolution
B (Paragraph 6 in body): administrative system and the Russian Revolution

Note that the French and Russian revolutions (A and B) may be dissimilar rather than similar in the way they affected innovation in any of the three areas of technology, military strategy, and administration. To use the alternating method, you just need to have something noteworthy to say about both A and B in each area. Finally, you may certainly include more than three pairs of alternating points: allow the subject matter to determine the number of points you choose to develop in the body of your essay.

Block method:
A (Paragraphs 1–3 in body): How the French Revolution encouraged or thwarted innovation
B (Paragraphs 4–6 in body): How the Russian Revolution encouraged or thwarted innovation

When do I use the block method? The block method is particularly useful in the following cases:

  • You are unable to find points about A and B that are closely related to each other.
  • Your ideas about B build upon or extend your ideas about A.
  • You are comparing three or more subjects as opposed to the traditional two.

Department of Comparative Literature

Recent Dissertations in Comparative Literature

Dissertations in Comparative Literature have taken on a vast number of topics and ranged across various languages, literatures, historical periods, and theoretical perspectives. The department seeks to help each student craft a unique project and find the resources across the university to support and enrich her chosen field of study. The excellence of student dissertations has been recognized by several prizes, both within Yale and by the American Comparative Literature Association.

2012 – Present

Student Name Dissertation Title Year Advisors
Beretta, Francesca The Motionscape of Greek Tragedy: Greek Drama Through the Prism of Movement 2024

Marta Figlerowicz

Pauline LeVen

Lahiri, Ray The Violence of the Form: Violence and the Political in Greek and Latin Historical Narrative 2024

Moira Fradinger

Christina Kraus

Lee-Lenfield, Spencer This Beauty Born of Parting: Literary Translation Between Korean and English via the Korean Diaspora, 1920–Present 2024 Marta Figlerowicz
Pabon, Maru Agitated Layers of Air: Third-Worldism and the “Voice of the People” Across Palestine, Cuba and Algeria 2024 Robyn Creswell

Stern, Lindsay

Personhood: Literary Visions of a Legal Fiction

2023

Jesus Velasco

Rudiger Campe

Todorovic, Nebojsa Tragedies of Disintegration: Balkanizing Greco-Roman Antiquity 2023

Emily Greenwood Milne

Moira Fradinger

Abazon, Lital Speaking Sovereignty: The Plight of Multilingual Literature in Independent Israel, Morocco, and Algeria 2023

Hannan Hever

Jill Jarvis

Huang, Honglan Reading as Performance: Theatrical Books From Tristram Shandy to Artists’ Books for Children 2023 Katie Trumpener
Peng, Hsin-Yuan Cinematic Meteorology: Aesthetics and Epistemology of Weather Images 2023

Aaron Gerow

John Peters

Sidorenko, Ksenia Modernity’s Others: Marginality, Mass Culture, and the Early Comic Strip in the US 2023

Katie Trumpener

Marta Figlerowicz

Hamilton, Ted Imagining a Crisis: Human-Environmental Relations in North and South American Law and Literature 2022

Michael Warner

Moira Fradinger

Lee, Xavier Nonhistory: Slavery and the Black Historical Imagination 2022 Marta Figlerowicz
Suther, Jensen Spirit Disfigured: The Persistence of Freedom in the Modernist Novel 2022 Martin Hagglund
Baena, Victoria The Novel’s Lost Illusions: Time, Knowledge, and Narrative in the Provinces, 1800-1933 2021

Katie Trumpener

Maurice Samuels

Brunazzo, Alessandro Conjuring People: Pasolini’s Specters and the Global South 2021

Millicent Marcus

Dudley Andrew

Gubbins, Vanessa The Poem and Social Form: Making a People Out of a Poem in Peru and Germany 2021

Moira Fradinger

Paul North

Hirschfeld-Kroen, Leana Rise of the Modern Mediatrix: The Feminization of Media and Mediating Labor, 1865-1945 2021

Katie Trumpener

Charles Musser

Velez Valencia, Camila Craft and Storytelling: Romance and Reality in Joseph Conrad and Gabriel García Márquez 2021

Moira Fradinger

David Bromwich

Sheidaee, Iraj In Between Dār Al-Islām and the ‘Lands of the Christians’: Three Christian Arabic Travel Narratives From the Early Modern/Ottoman Period (Mid-17th-Early18th Centuries)  2021 Creswell, Robyn
Tolstoy, Andrey Where Do We Go When We Go Off-the-Grid? 2021

Francesco Casetti

Charles Musser

Fox, Catherine Christophe’s Ghost: The Making and Unmaking of Tragedy in Post-Revolutionary Haiti 2020

Marta Figlerowicz

Emily Greenwood

Piňos, Václav Haeckel’s Feral Embryo: Animality and Personal Formation in Western Origin Myths from Milton to Golding 2020

Rüdiger Campe

Marta Figlerowicz

Yovel, Noemi Confession and the German and American Novel: Intimate Talk, Violence and Last Confession 2019

Rüdiger Campe

Katie Trumpener

Mathew, Shaj

Wandering Comparisons: Global Genealogies of Flânerie and Modernity 2019

Marta Figlerowicz

Amy Hungerford

Tartici, Ayten

Adagios of Form 2019

Amy Hungerford

Carol Jacobs

Ruth Yeazell

Kivrak, Pelin Imperfect Cosmopolitans: Representations of Responsibility and Hospitality in Contemporary Middle Eastern Literatures, Film, and Art 2019

Katerina Clark

Martin Hägglund

Shpolberg, Masha Labor in Late Socialism: the Cinema of Polish Workers’ Unrest 1968-1981 2019

Katie Trumpener

Charles Musser

Powers, Julia Brazil’s Mystical Realists: Hilda Hilst, João Guimarães Rosa and Clarice Lispector in the 1960s 2018

David Quint

K. David Jackson

Eklund, Craig The Imagination in Proust, Joyce, and Beckett 2018 Martin Hägglund
Forsberg, Soren An Alien Point of View: Singular Experience and Literary Form 2018 Amy Hungerford;     Katie Trumpener
Weigel, Moira Animals, Media, and Modernity: Prehistories of the Posthuman 2017

Dudley Andrew;

Katie Trumpener

Carper, David Imagines historiarum: Renaissance Epic and the Development of Historical Thought  2017 David Quint
Fairfax, Daniel Politics, Aesthetics, Ontology: The Theoretical Legacy of Cahiers du cinema (1968-1973)  2017 Dudley Andrew
Li, Yukai Being late and being mistaken in the Homeric tradition 2017 Egbert Bakker;
Moira Fradinger
Nalencz, Leonard The Lives of Astyanax: Romance and Recovery in Ariosto, Spenser, and Milton 2017 David Quint
Chreiteh, Alexandra Fantastice Cohabitations: Magical Realism in Arabic and Hebrew and the Politics of Aesthetics 2016 Robyn Creswell
Harper, Elizabeth The Lost Children of Tragedy from Euripides to Racine 2016 David Quint
Piazza, Sarah Performing the Novel and Reading the Romantic Song: Popular Music and Metafiction in Tres tristes tigres, Sirena Selena vestida de pena, La importancia de llamarse Daniel Santos, Le cahier de romances, and Cien botellas en una pared  2016 David Quint;
Anibal González Pérez
Sinsky, Carolyn The Muse of Influence: Reading Russian Fiction in Britain, 1793 -1941  2016 Katie Trumpener
Sperling, Joshua Realism, Modernism and Commitment in the Work of John Berger: 1952-76  2016 Dudley Andrew
Younger, Neil D’apres le Roman: Cross-Channel Theatrical Adaptations from Richardson to Scott  2016 Thomas Kavanaugh;
Katie Trumpener
Bardi, Ariel Cleansing, Constructing, and Curating the State: India/Pakistan ‘47 and Israel/Palestine ‘48 2015 Hannan Hever
Kelbert, Eugenia Acquiring a Second Language Literature: Patterns in Translingual Writing from Modernism to the Moderns 2015

Vladimir Alexandrov;

Haun Saussy

Pfeifer, Annie To the Collector Belong the Spoils: The Transformation of Modernist Practices of Collecting 2015 Rüdiger Campe;Katie Trumpener
Roszak, Suzanne Triangular Diaspora and Social Resistance in the New American Literature 2015 Wai Chee Dimock;
Katie Trumpener
Dahlberg, Leif “Spacing Law and Politics: The constitution and representation of judicial places and juridicial spaces in law, literature and political philosophy in the works from Greek antiquity to the present” 2014 Carol Jacobs;
Haun Saussy
Weisberg, Margaret “Inventing the Desert and the Jungle: Creating identity through landscape in African and European culture” 2014 Christopher Miller;
Katie Trumpener
Wiedenfeld, Grant “Elastic Esthetics: A Comparative Media Approach to Modernist Literature and Cinema” 2014 Haun Saussy;
Francesco Casetti
Avrekh, Mikhail “Romantic Geographic and the (Re)invention of the Provinces in the Realist Novel” 2013

Katerina Clark

Maurice Samuels

Klemann, Heather “Developing Fictions: Childhood, Children’s Books, and the Novel” 2013 Jill Campbell;
Katie Trumpener
Mcmanus, Ann-Marie “Unfinished Awakenings: Afterlives of the Nahda and Postcolonialism in Arabic Literature 1894–2008” 2013 Haun Saussy;
Edwige Talbayev
Wolff, Spencer “The Darker Sides of Dignity: Freedom of Speech in the Wake of Authoritarian Collapse” 2013 Haun Saussy
Bloch, Elina “ ‘Unconfessed Confessions’: Strategies of (Not) Telling in Nineteenth-Century Narratives” 2012 Margaret Homans;
Katie Trumpener
Devecka, Martin “Athens, Rome, Tenochtitlan: A Historical Sociology of Ruins” 2012 Emily Greenwood
Gal, Noam “Fictional Inhumanities: Wartime Animals and Personification” 2012 Carol Jacobs;
Katie Trumpener
Jackson, Jeanne-Marie “Close to Home: Forms of Isolation in the Postcolonial Province” 2012 Katerina Clark;
Justin Neuman
Odnopozova, Dina “Russian-Argentine Literary Exchanges” 2012 Katerina Clark;
Moira Fradinger
Stevic, Aleksandar “Falling Short: Failure, Passivity, and the Crisis of Self-Fashioning in the European Novel, 1830–1927” 2012 Katie Trumpener;
Maurice Samuels
Student Name Dissertation Title Year Advisors
Cramer, Michael “Blackboard Cinema: Learning from the Pedagogical Art Film” 2011 Dudley Andrew;
John MacKay
Djagalov, Rossen “The People’s Republic of Letters: Towards a Media History of Twentieth-Century Socialist Internationalism” 2011 Katerina Clark;
Michael Denning
Esposito, Stefan “The Pathological Revolution: Romanticism and Metaphors of Disease” 2011 Paul Fry;
Carol Jacobs
Feldman, Daniel “Unrepeatable: Fiction After Atrocity” 2011

Katie Trumpener

Benjamin Harshav

Jeong, Seung-hoon “Cinematic Interfaces: Retheorizing Apparatus, Image, Subjectivity” 2011 Thomas Elsaesser;
Dudley Andrew
Lienau, Annette “Comparative Literature in the Spirit of Bandung: Script Change, Language Choice, and Ideology in African and Asian Literatures (Senegal & Indonesia)” 2011 Christopher Miller
Coker, William “Romantic Exteriority: The Construction of Literature in Rousseau, Jean Paul, and P.B. Shelley” 2010 Cyrus Hamlin;
Paul Fry
Fan, Victor “Football Meets Opium: A Topological Study of Political Violence, Sovereignty, and Cinema Archaeology Between ‘England’ and ‘China’ ” 2010 Haun Saussy;
Dudley Andrew
Johnson, Rebecca “A History of the Novel in Translation: Cosmopolitan Tales in English and Arabic, 1729–1859” 2010 Katie Trumpener
Parfitt, Alexandra “Immoral Lessons: Education and Novel in Nineteenth-Century France” 2010 Peter Brooks;
Maurice Samuels
Xie, Wei “Female Cross-Dressing in Chinese Opera and Cinema” 2010 Dudley Andrew
Flynn, Catherine “Street Things: Transformations of Experience in the Modern City” 2009 Carol Jacobs;
Katie Trumpener
Lovejoy, Alice “The Army and the Avant-Garde: Art Cinema in the Czechoslovak Military, 1951–1971” 2009 Katie Trumpener
Rhoads, Bonita “Frontiers of Privacy: The Domestic Enterprise of Modern Fiction” 2009 Peter Brooks
Rubini, Rocco “Renaissance Humanism and Postmodernity: A Rhetorical History” 2009 David Quint;
Giuseppe Mazzotta
Chaudhuri, Pramit “Themoacy: Ethical Criticism and the Struggle for Authority in Epic and Tragedy” 2008 Susanna Braund;
David Quint
Lisi, Leonardo “Aesthetics of Dependency: Early Modernism and the Struggle against Idealism in Kierkegaard Ibsen, and Henry James” 2008 Paul Fry;
Pericles Lewis
Weiner, Allison “Refusals of Mastery: Ethical Encounters in Henry James and Maurice Blanchot” 2008 Wai Chee Dimock;
Carol Jacobs
Hafiz, Hiba “The Novel and the Ancien Régime: Britain, France, and the Rise of the Novel in the Seventeenth Century” 2007 Peter Brooks;
Katie Trumpener
Illibruck, Helmut “Figurations of Nostalgia: From the Pre-Enlightenment to Romanticism and Beyond” 2007 Paul Fry
Kern, Anne Marie “The Sacred Made Material: Instances of Game and Play in Interwar Europe” 2007 Dudley Andrew
Boes, Tobias “The Syncopated Self: Crises of Historical Experience in the Modernist ” 2006 Carol Jacobs;
Pericles Lewis
Boyer, Patricio “Empire and American Visions of the Humane” 2006 Rolena Adorno;
Roberto Gonález Echevarría
Chang, Eugene “Disaster and Hope: A Study of Walter Benjamin and Maurice Blanchot” 2006 Shoshana Felman
Mannheimer, Katherine “ ‘The Scope in Ev’ry Page’: Eighteenth-Century Satire as a Mode of Vision” 2006 Jill Campbell;
Katie Trumpener
Solovieva, Olga “A Discourse Apart: The Body of Christ and the Practice of Cultural Subversion” 2006 Haun Saussy
van den Berg, Christopher “The Social Aesthetics of Tacitus’ ” 2006 Susanna Braund;
David Quint
Anderson, Jerome B. “New World Romance and Authorship” 2005 Vera Kutzinski;
Roberto Gonález Echevarría
Enjuto Rangel, Cecilia “Cities in Ruins in Modern Poetry” 2005 Roberto Gonález Echevarría
Kliger, Ilya “Truth, Time and the Novel: Verdiction in Tolstoy, Dostoevsky and Balzac” 2005 Peter Brooks;
Michael Holquist
Kolb, Martina “Journeys of Desire: Liguria as Literary Landscape in Eugenio Montale, Ezra Pound, and Gottfried Benn” 2005 Harold Bloom;
Peter Brooks
Matz, Aaron “Satire in the Age of Realism, 1860–1910” 2005 Peter Brooks;
Ruth Bernard Yeazell
Student Name Dissertation Title Year Advisors
Barrenechea, Antonio “Telluric Monstrosity in the Americas: The Encyclopedic Taxonomies of Fuentes, Melville, and Pynchon” 2004 Roberto Gonález Echevarría;
Vera Kutzinski
Buchenau, Stefanie “The Art of Invention and the Invention of Art. Logic, Rhetoric, and Aesthetics in the Early German Enlightenment” 2004 A. Wood;
G. Raulet
Friedman, Daniel “Pedagogies of Resistance” 2004 Shoshana Felman
Raff, Sarah “Erotics of Instruction: Jane Austen and the Generalizing Novel” 2004 Peter Brooks
Steiner, Lina “The Poetics of Maturity: Autonomy and Aesthetic Education in Byron, Pushkin, and Stendhal” 2004 Peter Brooks;
Michael Holquist
Chesney, Duncan “Signs of Aristocracy in : Proust and the Salon from Mme de Remouillet to Mme de Guermantes” 2003 Peter Brooks;
Pericles Lewis
Farbman, Herschel “Dreaming, Writing, and Restlessness in Freud, Blanchot, Beckett, and Joyce” 2003 Paul Fry
Fradinger, Moira “Radical Evil: Literary Visions of Political Origins in Sophocles, Sade and Vargas Llosa” 2003 Roberto Gonález Echevarría;
Shoshana Felman
Gsoels-Lorensen, Jutta “Epitaphic Remembrance: Representing a Catastrophic Past in Second Generation Texts” 2003 Vilashini Cooppan;
Benjamin Harshav
Horsman, Yasco “Theatres of Justice: Judging, Staging, and Working Through in Arendt, Brecht and Delbo” 2003 Shoshana Felman
Katsaros, Laure “A Kaleidoscope in the Midst of the Crowds: Poetry and the City in Walt Whitman’s and Charles Baudelaire’s ” 2003 Shoshana Felman
Reichman, Ravit “Taking Care: Injury and Responsibility in Literature and Law” 2003 Peter Brooks;
Shoshana Felman
Sun, Emily “Literature and Impersonality: Keats, Flaubert, and the Crisis of the Author” 2003 Shoshana Felman;
Paul Fry
Katsaros, George “Tragedy, Catharsis, and Reason: An Essay on the Idea of the Tragic” 2002 Shoshana Felman
Mirabile, Michael “From Inscription to Performance: The Rhetoric of Self-Enclosure in the Modern Novel” 2002 Peter Brooks
Alphandary, Idit “The Subject of Autonomy and Fellowship in: Guy de Maupassant, D.W. Winnicott and Joseph Conrad” 2001 Peter Brooks
Bateman, Chimène “Addresses of Desire: Literary Innivation and the Female Destinataire in Medieval and Renaissance Literature” 2001 Edwin Duval
David Quint
Butler, Henry E. “Writing and Vampires in the Works of Lautréamont, Bram Stoker, Daniel Paul Schreber, and Fritz Lang” 2001 Michael Holquist;
David Quint
Duerfahrd, Lance “The Work of Poverty: the Minimum in Samuel Beckett and Alain Resnais” 2001 Shoshana Felman;
Susan Blood
Hunt, Philippe “Spectres du réel: Déliminations du Réalism Magique” 2001 Paolo Valesio
Liu, Haoming “Transformation of Childhood Experience: Rainer Maria Rilke and Fei Ming” 2001 Cyrus Hamlin
Peretz, Eyal “Literature and the Enigma of Power: A Reading of Moby-Dick” 2001 Shoshana Felman
Pickford, Henry “The Sense of Semblance: Modern German and Russian Literature after Adorno” 2001 Karsten Harries;
Winfried Menninghaus;
William M. Todd III
von Zastrow, Claus “The Ground of Our Beseeching: The Guiding Sense of Place in German and English Elegiac Poetry” 2001 Paul Fry;
Cyrus Hamlin;
Winfried Menninghaus
Wilson, Emily “Why Do I Overlive? Greek, Latin and English Tragic Survival” 2001 Victor Bers;
David Quint
Lintz, Edward M. “A Curie for Poetry? Nuclear Disintegration and Gertrude Stein’s Modernist Reception” 2000 Michael Holquist;
Tyrus Miller
Anderson, Matthew D. “Modernity and the Example of Poetry: Readings in Baudelaire, Verlaine and Ashbery” 1999 Geoffrey Hartman
Bernstein, Jonathan “Parataxis in Heraclitus, Höderlin, Mayakovsky” 1999 Benjamin Harshav;
Winfried Menninghaus
Pollard, Tanya L. “Dangerous Remedies: Poison and Theatre in the English Renaissance” 1999 David Quint
Freeland, Natalka “Trash fiction: The Victorian Novel and the Rise of Disposable Culture” 1998 Peter Brooks;
Ruth Bernard Yeazell
Hood, Carra “Reading the News: Activism, Authority, Audience” 1998 Hazel Carby
MacKay, John “Placing the Lyric: An Essay on Poetry and Community 1998 Geoffrey Hartman; Tomas Venclova
Schuller, Mortiz “ ‘Watching the Self’: The Mirror of Self-Knowledge in Ancient Literature” 1998 Heinrich von Staden;
Gordon Williams
Stark, Jared “Beyond Words: Suicide and Modern Narrative” 1998 Cathy Caruth;
Geoffrey Hartman

National Academies Press: OpenBook

On Evaluating Curricular Effectiveness: Judging the Quality of K-12 Mathematics Evaluations (2004)

Chapter 5: Comparative Studies

It is deceptively simple to imagine that a curriculum’s effectiveness could be easily determined by a single well-designed study. Such a study would randomly assign students to two treatment groups, one using the experimental materials and the other using a widely established comparative program. The students would be taught the entire curriculum, and a test administered at the end of instruction would provide unequivocal results that would permit one to identify the more effective treatment.

The truth is that conducting definitive comparative studies is not simple, and many factors make such an approach difficult. Student placement and curricular choice are decisions that involve multiple groups of decision makers, accrue over time, and are subject to day-to-day conditions of instability, including student mobility, parent preference, teacher assignment, administrator and school board decisions, and the impact of standardized testing. This complex set of institutional policies, school contexts, and individual personalities makes comparative studies, even quasi-experimental approaches, challenging, and thus demands an honest and feasible assessment of what can be expected of evaluation studies (Usiskin, 1997; Kilpatrick, 2002; Schoenfeld, 2002; Shafer, in press).

Comparative evaluation study is an evolving methodology, and our purpose in conducting this review was to evaluate and learn from the efforts undertaken so far and advise on future efforts. We stipulated the use of comparative studies as follows:

A comparative study was defined as a study in which two (or more) curricular treatments were investigated over a substantial period of time (at least one semester, and more typically an entire school year) and a comparison of various curricular outcomes was examined using statistical tests. A statistical test was required to ensure the robustness of the results relative to the study’s design.

We read and reviewed a set of 95 comparative studies. In this report we describe that database, analyze its results, and draw conclusions about the quality of the evaluation database both as a whole and separated into evaluations supported by the National Science Foundation and commercially generated evaluations. In addition to describing and analyzing this database, we also provide advice to those who might wish to fund or conduct future comparative evaluations of mathematics curricular effectiveness. We have concluded that the process of conducting such evaluations is in its adolescence and could benefit from careful synthesis and advice in order to increase its rigor, feasibility, and credibility. In addition, we took an interdisciplinary approach to the task, noting that various committee members brought different expertise and priorities to the consideration of what constitutes the most essential qualities of rigorous and valid experimental or quasi-experimental design in evaluation. This interdisciplinary approach has led to some interesting observations and innovations in our methodology of evaluation study review.

This chapter is organized as follows:

  • Study counts disaggregated by program and program type.
  • Seven critical decision points and identification of at least minimally methodologically adequate studies.
  • Definition and illustration of each decision point.
  • A summary of results by student achievement in relation to program types (NSF-supported, University of Chicago School Mathematics Project (UCSMP), and commercially generated) in relation to their reported outcome measures.
  • A list of alternative hypotheses on effectiveness.
  • Filters based on the critical decision points.
  • An analysis of results by subpopulations.
  • An analysis of results by content strand.
  • An analysis of interactions among content, equity, and grade levels.
  • Discussion and summary statements.

In this report, we describe our methodology for review and synthesis so that others might scrutinize our approach and offer criticism on the basis of

our methodology and its connection to the results stated and conclusions drawn. In the spirit of scientific, fair, and open investigation, we welcome others to undertake similar or contrasting approaches and compare and discuss the results. Our work was limited by the short timeline set by the funding agencies resulting from the urgency of the task. Although we made multiple efforts to collect comparative studies, we apologize to any curriculum evaluators if comparative studies were unintentionally omitted from our database.

Of these 95 comparative studies, 65 were studies of NSF-supported curricula, 27 were studies of commercially generated materials, and 3 included two curricula each from one of these two categories. To avoid the problem of double coding, two studies, White et al. (1995) and Zahrt (2001), were coded within studies of NSF-supported curricula because more of the classes studied used the NSF-supported curriculum. These studies were not used in later analyses because they did not meet the requirements for the at least minimally methodologically adequate studies, as described below. The other, Peters (1992), compared two commercially generated curricula, and was coded in that category under the primary program of focus. Therefore, of the 95 comparative studies, 67 studies were coded as NSF-supported curricula and 28 were coded as commercially generated materials.

The 11 evaluation studies of the UCSMP secondary program that we reviewed, not including White et al. and Zahrt as previously mentioned, benefit from the maturity of the program, while demonstrating an orientation to both establishing effectiveness and improving a product line. For these reasons, at times we will present the summary of UCSMP’s data separately.

The Saxon materials also present a somewhat different profile from the other commercially generated materials because many of the evaluations of these materials were conducted in the 1980s and the materials were originally developed with a rather atypical program theory. Saxon (1981) designed its algebra materials to combine distributed practice with incremental development. We selected the Saxon materials as a middle grades commercially generated program, and limited its review to middle school studies from 1989 onward when the first National Council of Teachers of Mathematics (NCTM) Standards (NCTM, 1989) were released. This eliminated concerns that the materials or the conditions of educational practice have been altered during the intervening time period. The Saxon materials explicitly do not draw from the NCTM Standards nor did they receive support from the NSF; thus they truly represent a commercial venture. As a result, we categorized the Saxon studies within the group of studies of commercial materials.

At times in this report, we describe characteristics of the database by particular curricular program evaluations, in which case all 19 programs are listed separately. At other times, when we seek to inform ourselves on policy-related issues of funding and evaluating curricular materials, we use the NSF-supported, commercially generated, and UCSMP distinctions. We remind the reader of the artificial aspects of this distinction because at the present time, 18 of the 19 curricula are published commercially. In order to track the question of historical inception and policy implications, a distinction is drawn between the three categories. Figure 5-1 shows the distribution of comparative studies across the 14 programs.

FIGURE 5-1 The distribution of comparative studies across programs. Programs are coded by grade band: black bars = elementary, white bars = middle grades, and gray bars = secondary. In this figure, there are six studies that involved two programs and one study that involved three programs.

NOTE: Five programs (MathScape, MMAP, MMOW/ARISE, Addison-Wesley, and Harcourt) are not shown above since no comparative studies were reviewed.

The first result the committee wishes to report is the uneven distribution of studies across the curricular programs. There were 67 coded studies of the NSF curricula, 11 studies of UCSMP, and 17 studies of the commercial publishers. The 14 evaluation studies conducted on the Saxon materials compose the bulk of these 17 non-UCSMP and non-NSF-supported curricular evaluation studies. As these results suggest, we know more about the evaluations of the NSF-supported curricula and UCSMP than about the evaluations of the commercial programs. We suggest that three factors account for this uneven distribution of studies. First, evaluations have been funded by the NSF both as a part of the original call, and as follow-up to the work in the case of three supplemental awards to two of the curricula programs. Second, most NSF-supported programs and UCSMP were developed at university sites where there is access to the resources of graduate students and research staff. Finally, there was some reported reluctance on the part of commercial companies to release studies that could affect perceptions of competitive advantage. As Figure 5-1 shows, there were quite a few comparative studies of Everyday Mathematics (EM), Connected Mathematics Project (CMP), Contemporary Mathematics in Context (Core-Plus Mathematics Project [CPMP]), Interactive Mathematics Program (IMP), UCSMP, and Saxon.

In the programs with many studies, we note that a significant number of studies were generated by a core set of authors. In some cases, the evaluation reports follow a relatively uniform structure applied to single schools, generating multiple studies or following cohorts over years. Others use a standardized evaluation approach to evaluate sequential courses. Any reports duplicating exactly the same sample, outcome measures, or forms of analysis were eliminated. For example, one study of Mathematics Trailblazers (Carter et al., 2002) reanalyzed the data from the larger ARC Implementation Center study (Sconiers et al., 2002), so it was not included separately. Synthesis studies referencing a variety of evaluation reports are summarized in Chapter 6 , but relevant individual studies that were referenced in them were sought out and included in this comparative review.

Other less formal comparative studies are conducted regularly at the school or district level, but such studies were not included in this review unless we could obtain formal reports of their results, and the studies met the criteria outlined for inclusion in our database. In our conclusions, we address the issue of how to collect such data more systematically at the district or state level in order to subject the data to the standards of scholarly peer review and make it more systematically and fairly a part of the national database on curricular effectiveness.

A standard for evaluation of any social program requires that an impact assessment is warranted only if two conditions are met: (1) the curricular program is clearly specified, and (2) the intervention is well implemented. Absent this assurance, one must have a means of ensuring or measuring treatment integrity in order to make causal inferences. Rossi et al. (1999, p. 238) warned that:

two prerequisites [must exist] for assessing the impact of an intervention. First, the program’s objectives must be sufficiently well articulated to make it possible to specify credible measures of the expected outcomes, or the evaluator must be able to establish such a set of measurable outcomes. Second, the intervention should be sufficiently well implemented that there is no question that its critical elements have been delivered to appropriate targets. It would be a waste of time, effort, and resources to attempt to estimate the impact of a program that lacks measurable outcomes or that has not been properly implemented. An important implication of this last consideration is that interventions should be evaluated for impact only when they have been in place long enough to have ironed out implementation problems.

These same conditions apply to evaluation of mathematics curricula. The comparative studies in this report varied in the quality of documentation of these two conditions; however, all addressed them to some degree or another. By initially reviewing the studies, we were able to identify one general design template, consisting of seven critical decision points, and determined that it could be used to develop a framework for conducting our meta-analysis. The seven critical decision points we identified initially were:

  1. Choice of type of design: experimental or quasi-experimental;
  2. For those studies that do not use random assignment: what methods of establishing comparability of groups were built into the design; this includes student characteristics, teacher characteristics, and the extent to which professional development was involved as part of the definition of a curriculum;
  3. Definition of the appropriate unit of analysis (students, classes, teachers, schools, or districts);
  4. Inclusion of an examination of implementation components;
  5. Definition of the outcome measures and disaggregated results by program;
  6. The choice of statistical tests, including statistical significance levels and effect size (a brief effect-size sketch follows this list); and
  7. Recognition of limitations to generalizability resulting from design choices.
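
As an illustration of the sixth decision point, the sketch below computes Cohen's d, one common standardized effect-size measure. It is not drawn from any of the reviewed studies; the group scores are hypothetical placeholders.

```python
# Minimal sketch: Cohen's d, a standardized mean difference based on the
# pooled standard deviation. The two groups below are hypothetical.
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

treatment  = [74, 81, 69, 88, 77, 92, 70, 85]   # hypothetical treatment-group scores
comparison = [71, 76, 65, 79, 73, 82, 68, 75]   # hypothetical comparison-group scores

print(f"Cohen's d = {cohens_d(treatment, comparison):.2f}")
```

Reporting an effect size alongside a significance level indicates not only whether a difference is unlikely to be due to chance, but also how large that difference is in practical terms.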

These are critical decisions that affect the quality of an evaluation. We further identified a subset of these evaluation studies that met a set of minimum conditions that we termed at least minimally methodologically adequate studies. Such studies are those with the greatest likelihood of shedding light on the effectiveness of these programs. To be classified as at least minimally methodologically adequate, and therefore to be considered for further analysis, each evaluation study was required to:

  • Include quantifiably measurable outcomes such as test scores, responses to specified cognitive tasks of mathematical reasoning, performance evaluations, grades, and subsequent course taking; and
  • Provide adequate information to judge the comparability of samples.

In addition, a study must have included at least one of the following additional design elements:

  • A report of implementation fidelity or professional development activity;
  • Results disaggregated by content strands or by performance by student subgroups; and/or
  • Multiple outcome measures or precise theoretical analysis of a measured construct, such as number sense, proof, or proportional reasoning.

Using this rubric, the committee identified a subset of 63 comparative studies to classify as at least minimally methodologically adequate and to analyze in depth to inform the conduct of future evaluations. There are those who would argue that any threat to the validity of a study discredits the findings, thus claiming that until we know everything, we know nothing. Others would claim that from the myriad of studies, examining patterns of effects and patterns of variation, one can learn a great deal, perhaps tentatively, about programs and their possible effects. More importantly, we can learn about methodologies and how to concentrate and focus to increase the likelihood of learning more quickly. As Lipsey (1997, p. 22) wrote:

In the long run, our most useful and informative contribution to program managers and policy makers and even to the evaluation profession itself may be the consolidation of our piecemeal knowledge into broader pictures of the program and policy spaces at issue, rather than individual studies of particular programs.

We do not wish to imply that we devalue studies of student affect or conceptions of mathematics, but we decided that unless these indicators were connected to direct indicators of student learning, we would eliminate them from further study. As a result of this sorting, we eliminated 19 studies of NSF-supported curricula and 13 studies of commercially generated curricula. Of these, 4 were eliminated for their sole focus on affect or conceptions, 3 were eliminated for their comparative focus on outcomes other than achievement, such as teacher-related variables, and 19 were eliminated for their failure to meet the minimum additional characteristics specified in the criteria above. In addition, six others were excluded from the studies of commercial materials because they were not conducted within the grade-level band specified by the committee for the selection of that program.

From this point onward, all references can be assumed to refer to at least minimally methodologically adequate studies unless a study is referenced for illustration, in which case we label it with “EX” to indicate that it is excluded from the summary analyses. Studies labeled “EX” are occasionally referenced because they can provide useful information on certain aspects of curricular evaluation, but not on overall effectiveness.

The at least minimally methodologically adequate studies reported on a variety of grade levels. Figure 5-2 shows the different grade levels of the studies. At times, the choice of grade levels was dictated by the years in which high-stakes tests were given. Most of the studies reported on multiple grade levels, as shown in Figure 5-2.

Using the seven critical design elements of at least minimally methodologically adequate studies as a design template, we describe the overall database and discuss the array of choices on critical decision points with examples. Following that, we report on the results of the at least minimally methodologically adequate studies by program type. To do so, the results of each study were coded as either statistically significant or not.


FIGURE 5-2 Single-grade studies by grade and multigrade studies by grade band.

Those studies that contained statistically significant results were assigned a percentage of outcomes that were positive (in favor of the treatment curriculum), based on the number of statistically significant comparisons reported relative to the total number of comparisons reported, and a percentage of outcomes that were negative (in favor of the comparison curriculum). The remaining outcomes were coded as the percentage that were nonsignificant. Then, using the seven critical decision points as filters, we identified and examined more closely sets of studies that exhibited the strongest designs and would therefore be most likely to increase our confidence in the validity of the evaluation. In the last section, we consider alternative hypotheses that could explain the results.
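To make the coding step concrete, the following sketch (not part of the original report) shows one way such percentages might be computed, assuming each study's comparisons have already been coded as positive, negative, or nonsignificant; the function and data are hypothetical.

```python
# Minimal sketch of the outcome-coding step described above (illustrative only).
# Each comparison in a study is assumed to be recorded as "positive" (statistically
# significant in favor of the treatment curriculum), "negative" (significant in favor
# of the comparison curriculum), or "nonsignificant".

from collections import Counter

def code_study_outcomes(comparisons):
    """Return the percentage of positive, negative, and nonsignificant comparisons."""
    counts = Counter(comparisons)
    total = len(comparisons)
    return {category: 100.0 * counts[category] / total
            for category in ("positive", "negative", "nonsignificant")}

# Hypothetical study reporting ten comparisons
example = ["positive"] * 6 + ["negative"] * 1 + ["nonsignificant"] * 3
print(code_study_outcomes(example))
# {'positive': 60.0, 'negative': 10.0, 'nonsignificant': 30.0}
```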

The committee emphasizes that we did not directly evaluate the materials. We present no analysis of results aggregated across studies by naming individual curricular programs because we did not consider the magnitude or rigor of the database for individual programs substantial enough to do so. Nevertheless, there are studies that provide compelling data concerning the effectiveness of the program in a particular context. Furthermore, although we do report on individual studies and their results to highlight issues of approach and methodology, to remain within our primary charge, which was to evaluate the evaluations, we do not summarize the results of the individual programs.

DESCRIPTION OF COMPARATIVE STUDIES DATABASE ON CRITICAL DECISION POINTS

An experimental or quasi-experimental design.

We separated the studies into experimental and quasi-experimental, and found that 100 percent of the studies were quasi-experimental (Campbell and Stanley, 1966; Cook and Campbell, 1979; Rossi et al., 1999). 1 Within the quasi-experimental studies, we identified three subcategories of comparative study. In the first case, we identified a study as cross-curricular comparative if it compared the results of curriculum A with those of curriculum B. A few studies in this category also compared two samples within the same curriculum under different conditions, such as high and low implementation quality.

A second category of quasi-experimental study involved time-series comparisons that could shed light on effectiveness. These studies compared the performance of a sample of students in a curriculum

  

One study, by Peters (1992), used random assignment to two classrooms, but was classified as quasi-experimental because of its limited sample size and use of qualitative methods.


FIGURE 5-3 The number of comparative studies in each category.

under investigation across time, such as in a longitudinal study of the same students over time. A third category of comparative study involved a comparison to some form of externally normed results, such as populations taking state, national, or international tests, or prior research assessments from a published study or studies. We categorized these studies, divided them into NSF, UCSMP, and commercial, and labeled them by the categories above (Figure 5-3).

In nearly all studies in the comparative group, the titles of the experimental curricula were explicitly identified. The only exception was the ARC Implementation Center study (Sconiers et al., 2002), in which three NSF-supported elementary curricula were examined but their effects were pooled in the results. In contrast, in the majority of cases, the comparison curriculum is referred to simply as “traditional.” In only 22 cases were comparisons made between two identified curricula. Many other studies surveyed the array of curricula at comparison schools and reported on the most frequently used, but did not identify a single curriculum. This design strategy was often used because other factors were used in selecting comparison groups, and the additional requirement of a single identified curriculum in

these sites would often have made it difficult to match. Studies were categorized as using specified (a single identified curriculum or multiple identified curricula) or nonspecified comparison curricula. In the 63 studies, the central group was compared to an NSF-supported curriculum (1), an unnamed traditional curriculum (41), a named traditional curriculum (19), or one of the six commercial curricula (2). To our knowledge, any systematic impact of such a decision on results has not been studied, but we express concern that when a specified curriculum is compared to an unspecified comparison group consisting of many informal curricula, the comparison may favor the coherence and consistency of the single curriculum; we consider this possibility subsequently under alternative hypotheses. We believe that a quality study should at least report the array of curricula that make up the comparison group and include a measure of the frequency of use of each, but a well-defined alternative is more desirable.

If a study was both longitudinal and comparative, then it was coded as comparative. When a study examined only the performance of a group over time, as in some longitudinal studies, it was coded as quasi-experimental normed. In longitudinal studies, the problems created by student mobility were evident. In one study, Carroll (2001), a five-year longitudinal study of Everyday Mathematics, the sample began with 500 students, 24 classrooms, and 11 schools. By 2nd grade, the longitudinal sample was 343 students. By 3rd grade, the number of classes increased to 29 while the number of original students decreased to 236. At the completion of the study, approximately 170 of the original students were still in the sample. This high rate of attrition suggests that mobility is a major challenge in curricular evaluation, and that the effects of curricular change on mobile students need to be studied as a potential threat to the validity of the comparison. Mobility is also a challenge in curriculum implementation because students coming into a program do not experience its cumulative, developmental effect.

Longitudinal studies also have unique challenges associated with outcome measures; a study by Romberg et al. (in press) (EX) discussed one approach to this problem. In this study, an external assessment system and a problem-solving assessment system were used. In the External Assessment System, items from the National Assessment of Educational Progress (NAEP) and the Third International Mathematics and Science Study (TIMSS) were balanced across four strands (number, geometry, algebra, probability and statistics), and 20 items of moderate difficulty, called anchor items, were repeated on each grade-specific assessment (p. 8). Because the analyses of the results are currently under way, the evaluators could not provide us with final results of this study, so it is coded as EX.

However, such longitudinal studies can provide substantial evidence of the effects of a curricular program because they may be more sensitive to an

TABLE 5-1 Scores in Percentage Correct by Everyday Mathematics Students and Various Comparison Groups Over a Five-Year Longitudinal Study

| Group | Sample Size | 1st Grade | 2nd Grade | 3rd Grade | 4th Grade | 5th Grade |
|---|---|---|---|---|---|---|
| EM | n=170-503 | 58 | 62 | 61 | 71 | 75 |
| Traditional U.S. | n=976 | 43 | 53.5 |  |  | 44 |
| Japanese | n=750 | 64 | 71 |  |  | 80 |
| Chinese | n=1,037 | 52 |  |  |  | 76 |
| NAEP Sample | n=18,033 |  |  | 44 | 44 |  |

NOTE: 1st grade: 44 items; 2nd grade: 24 items; 3rd grade: 22 items; 4th grade: 29 items; and 5th grade: 33 items.

SOURCE: Adapted from Carroll (2001).

accumulation of modest effects and/or can reveal whether the rates of learning change over time within curricular change.

The longitudinal study by Carroll (2001) showed that the effects of curricula may often accrue over time, but measurements of achievement present challenges to drawing such conclusions as the content and grade level change. A variety of measures were used over time to demonstrate growth in relation to comparison groups. The author chose a set of measures used previously in studies involving two Asian samples and an American sample to provide a contrast to the students in EM over time. For 3rd and 4th grades, where the data from the comparison group were not available, the authors selected items from the NAEP to bridge the gap. Table 5-1 summarizes the scores of the different comparative groups over five years. Scores are reported as the mean percentage correct for a series of tests on number computation, number concepts and applications, geometry, measurement, and data analysis.

It is difficult to compare the performances of different groups on different tests over time against a single longitudinal group from EM, and it is not possible to determine whether the students’ performance is increasing or whether the changes in the tests at each grade level are producing the results. Thus the results from longitudinal studies lacking a control group or sophisticated methodological analysis may be suspect and should be interpreted with caution.

In the Hirsch and Schoen (2002) study, based on a sample of 1,457 students and their scores on the Ability to Do Quantitative Thinking (ITED-Q) test, a subtest of the Iowa Tests of Educational Development, students in Core-Plus showed increasing performance relative to national norms over the three-year period. The authors describe the content of the ITED-Q test and point out

that “although very little symbolic algebra is required, the ITED-Q is quite demanding for the full range of high school students” (p. 3). They further point out that “[t]his 3-year pattern is consistent, on average, in rural, urban, and suburban schools, for males and females, for various minority groups, and for students for whom English was not their first language” (p. 4). In this case, one sees that studies over time are important: results over shorter periods may mask the cumulative effects of consistent and coherent treatments, and such studies could also show increases that do not persist over longer trajectories. One approach to longitudinal studies was used by Webb and Dowling in their studies of the Interactive Mathematics Program (Webb and Dowling, 1995a, 1995b, 1995c). These researchers conducted transcript analyses as a means to examine student persistence and success in subsequent course taking.

The third category of quasi-experimental comparative studies measured student outcomes on a particular curricular program and simply compared them to performance on national or international tests. When these tests were of good quality and representative of a genuine sample of a relevant population, such as NAEP reports or TIMSS results, they often provided a reasonable indicator of the effects of the program if combined with a careful description of the sample. Also, the national or state tests used were sometimes norm-referenced tests producing national percentiles or grade-level equivalents. The normed studies were considered weaker in establishing effectiveness, but were still considered valid as examples of comparing samples to populations.

For Studies That Do Not Use Random Assignment: What Methods of Establishing Comparability Across Groups Were Built into the Design

The most fundamental question in an evaluation study is whether the treatment has had an effect on the chosen criterion variable. In our context, the treatment is the curriculum materials and, in some cases, related professional development, and the outcome of interest is academic learning. To establish whether there is a treatment effect, one must logically rule out as many other explanations as possible for the differences in the outcome variable. There is a long tradition on how this is best done, and the principle from a design point of view is to assure that there are no differences between the treatment conditions (in these evaluations, often only the new curriculum materials to be evaluated and a control group) either at the outset of the study or during its conduct.

To ensure the first condition, the ideal procedure is the random assignment of the appropriate units to the treatment conditions. The second condition requires that the treatment is administered reliably during the length of the study, and is assured through the careful observation and

control of the situation. Without randomization, there is a host of possible confounding variables that could differ among the treatment conditions and that are themselves related to the outcome variables. Put another way, the treatment effect is a parameter that the study is set up to estimate. Statistically, an unbiased estimate is desired: one whose expected value over repeated samplings is equal to the true value of the parameter. Without randomization at the outset of a study, there is no way to assure this property of unbiasedness. The variables that differ across treatment conditions and are related to the outcomes are confounding variables, which bias the estimation process.
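The following simulation is an illustrative sketch, not an analysis from the report: it shows how a confounding variable (here, a hypothetical prior-achievement measure) that influences both treatment assignment and the outcome biases the naive difference in means, while random assignment does not.

```python
# Illustrative simulation of confounding bias versus randomization (hypothetical data).
import numpy as np

rng = np.random.default_rng(0)
true_effect = 5.0
n = 1000

# Confounder (e.g., prior achievement) that also raises the outcome
prior = rng.normal(50, 10, size=2 * n)

# Nonrandomized assignment: higher-prior students are more likely to get the treatment
p_treat = 1 / (1 + np.exp(-(prior - 50) / 5))
treated = rng.random(2 * n) < p_treat
outcome = 0.8 * prior + true_effect * treated + rng.normal(0, 5, size=2 * n)
naive_estimate = outcome[treated].mean() - outcome[~treated].mean()

# Randomized assignment removes the association between confounder and treatment
treated_r = rng.random(2 * n) < 0.5
outcome_r = 0.8 * prior + true_effect * treated_r + rng.normal(0, 5, size=2 * n)
randomized_estimate = outcome_r[treated_r].mean() - outcome_r[~treated_r].mean()

print(f"true effect: {true_effect}")
print(f"nonrandomized (confounded) estimate: {naive_estimate:.1f}")   # noticeably above 5
print(f"randomized estimate: {randomized_estimate:.1f}")              # close to 5
```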

Only one study we reviewed, Peters (1992), used randomization in the assignment of students to treatments, but that was possible only because the study was limited to one teacher teaching two sections and included substantial qualitative methods, so we coded it as quasi-experimental. Others reported partially assigning teachers randomly to treatment conditions (Thompson et al., 2001; Thompson et al., 2003). Two primary reasons seem to account for the lack of use of pure experimental designs. First, to justify the conduct and expense of a randomized field trial, the program must be described adequately and there must be relative assurance that its implementation has occurred over the duration of the experiment (Peterson et al., 1999). Additionally, one must be sure that the outcome measures are appropriate for the range of performances in the groups and valid relative to the curricula under investigation. Seldom can such conditions be assured for all students and teachers over the duration of a year or more.

A second reason is that random assignment of classrooms to curricular treatment groups typically is not permitted or encouraged under normal school conditions. As one evaluator wrote, “Building or district administrators typically identified teachers who would be in the study and in only a few cases was random assignment of teachers to UCSMP Algebra or comparison classes possible. School scheduling and teacher preference were more important factors to administrators and at the risk of losing potential sites, we did not insist on randomization” (Mathison et al., 1989, p. 11).

The Joint Committee on Standards for Educational Evaluation (1994, p. 165) recognized the likelihood of limitations on randomization, writing:

The groups being compared are seldom formed by random assignment. Rather, they tend to be natural groupings that are likely to differ in various ways. Analytical methods may be used to adjust for these initial differences, but these methods are based upon a number of assumptions. As it is often difficult to check such assumptions, it is advisable, when time and resources permit, to use several different methods of analysis to determine whether a replicable pattern of results is obtained.

Does the dearth of pure experimentation render the results of the studies reviewed worthless? Bias is not an “either-or” proposition; it is a quantity of varying degree. Through careful measurement of the most salient potential confounding variables, precise theoretical description of constructs, and appropriate methods of statistical analysis, it is possible to reduce the amount of bias in the estimated treatment effect. Identifying the most likely confounding variables, measuring them, and making subsequent adjustments can greatly reduce bias and help estimate an effect that is more reflective of the true value. A theoretically fully specified model is an alternative to randomization: by including all relevant variables, it allows unbiased estimation of the parameter. The only problem is knowing when the model is fully specified.

We recognized that we can never have enough knowledge to assure a fully specified model, especially in the complex and unstable conditions of schools. However, a key issue in determining the degree of confidence we have in these evaluations is to examine how they have identified, measured, or controlled for such confounding variables. In the next sections, we report on the methods of the evaluators in identifying and adjusting for such potential confounding variables.

One method of dealing with confounding variables is to examine the extent to which the samples investigated are equated, either by sample selection or by methods of statistical adjustment. For individual students, there is a large literature suggesting the importance of social class to achievement. In addition, the prior achievement of students must be considered. In the comparative studies, investigators first identified districts, schools, or classes whose participation could provide sufficient duration of use of curricular materials (typically two years or more), availability of target classes, or adequate levels of use of program materials. Establishing comparability was a secondary concern.

These two major factors were generally used in establishing the comparability of the sample:

Student population characteristics, such as demographic characteristics of students in terms of race/ethnicity, economic levels, or location type (urban, suburban, or rural).

Performance-level characteristics such as performance on prior tests, pretest performance, percentage passing standardized tests, or related measures (e.g., problem solving, reading).

In general, four methods of comparing groups were used in the studies we examined, and they permit different degrees of confidence in their results. In the first type, a matching class, school, or district was identified.

Studies were coded as this type if specified characteristics were used to select the schools systematically. In some of these studies, the methodology was relatively complex, as correlates of performance on the outcome measures were found empirically and matches were created on that basis (Schneider, 2000; Riordan and Noyce, 2001; Sconiers et al., 2002). For example, in the Sconiers et al. study, where the total sample of more than 100,000 students was drawn from five states and three elementary curricula were reviewed (Everyday Mathematics, Math Trailblazers [MT], and Investigations [IN]), a highly systematic method was developed. After defining eligibility as a “reform school,” evaluators conducted separate regression analyses for the five states at each tested grade level to identify the strongest predictors of average school mathematics score. They reported, “reading score and low-income variables … consistently accounted for the greatest percentage of total variance. These variables were given the greatest weight in the matching process. Other variables—such as percent white, school mobility rate, and percent with limited English proficiency (LEP)—accounted for little of the total variance but were typically significant. These variables were given less weight in the matching process” (Sconiers et al., 2002, p. 10). To further provide a fair and complete comparison, adjustments were made based on regression analysis of the scores to minimize bias prior to calculating the difference in scores and reporting effect sizes. In their results the evaluators report, “The combined state-grade effect sizes for math and total are virtually identical and correspond to a percentile change of about 4 percent favoring the reform students” (p. 12).
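The sketch below illustrates the general logic of this kind of regression-weighted matching under simplified, hypothetical data; the variable names, weights, and school data are assumptions for illustration and do not reproduce the Sconiers et al. procedure.

```python
# Illustrative sketch: covariates that predict school mean scores more strongly receive
# more weight when selecting a matched comparison school. All data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical school-level data: reading score and percent low-income predict math score
schools = pd.DataFrame({
    "reading": rng.normal(200, 15, 60),
    "low_income": rng.uniform(0, 0.9, 60),
    "pct_white": rng.uniform(0.1, 0.95, 60),
})
schools["math"] = (1.2 * schools["reading"] - 30 * schools["low_income"]
                   + 2 * schools["pct_white"] + rng.normal(0, 5, 60))

# Step 1: regression identifies the strongest predictors of school math scores
covs = ["reading", "low_income", "pct_white"]
X = np.column_stack([np.ones(len(schools)), schools[covs]])
coef, *_ = np.linalg.lstsq(X, schools["math"], rcond=None)
weights = np.abs(coef[1:]) / np.abs(coef[1:]).sum()   # heavier weight for stronger predictors

# Step 2: match each "reform" school to the closest comparison school on weighted covariates
reform, comparison = schools.iloc[:10], schools.iloc[10:]
scale = schools[covs].std()

def closest_match(row):
    dist = (((comparison[covs] - row[covs]) / scale) ** 2 * weights).sum(axis=1)
    return dist.idxmin()

matches = {i: closest_match(row) for i, row in reform.iterrows()}
print(matches)   # reform school index -> matched comparison school index
```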

A second type of matching procedure was used in the UCSMP evaluations. For example, in an evaluation centered on geometry learning, evaluators advertised in NCTM and UCSMP publications and set conditions for participation from schools using their program in terms of length of use and grade level. After selecting schools with heterogeneous grouping and no tracking, the researchers used a matched-pair design in which they selected classes from the same school on the basis of mathematics ability. They used a pretest to determine this, and because the pretest consisted of two parts, they adjusted their significance level using the Bonferroni method. 2 Pairs were discarded if the differences in means and variance were significant for all students or for those students completing all measures, or if class sizes became too variable. In the algebra study, there were 20 pairs as a result of the matching, and because they were comparing three experimental conditions—first edition, second edition, and comparison classes—in the

  

The Bonferroni method is a simple method that allows multiple comparison statements to be made (or confidence intervals to be constructed) while still assuring that an overall confidence coefficient is maintained.

comparison study relevant to this review, their matching procedure identified 8 pairs. When possible, teachers were assigned randomly to treatment conditions. Most results are presented for the eight identified pairs and an accumulated set of means. The outcomes of this particular study are described below in the discussion of outcome measures (Thompson et al., 2003).
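As a concrete illustration of the Bonferroni adjustment described in footnote 2, the following sketch tests two hypothetical pretest parts at alpha divided by the number of comparisons; the data and the retain/discard rule are assumptions for illustration, not the UCSMP evaluators' procedure.

```python
# Minimal sketch of a Bonferroni adjustment for a two-part pretest (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, n_comparisons = 0.05, 2
adjusted_alpha = alpha / n_comparisons   # 0.025 per pretest part

# Hypothetical pretest scores for one candidate matched pair of classes
class_a = rng.normal(50, 10, size=(2, 25))   # two pretest parts, 25 students each
class_b = rng.normal(50, 10, size=(2, 25))

for part in range(n_comparisons):
    t, p = stats.ttest_ind(class_a[part], class_b[part])
    keep = p >= adjusted_alpha   # pairs with significant pretest differences are discarded
    print(f"part {part + 1}: p = {p:.3f}, retain pair: {keep}")
```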

A third method was to measure factors such as prior performance or socio-economic status (SES) based on pretesting, and then to use analysis of covariance or multiple regression in the subsequent analysis to factor in the variance associated with these factors. These studies were coded as “control.” A number of studies of the Saxon curricula used this method. For example, Rentschler (1995) conducted a study of Saxon 76 compared to Silver Burdett with 7th graders in West Virginia. He reported that the groups differed significantly in that the control classes had 65 percent of the students on free and reduced-price lunch programs compared to 55 percent in the experimental conditions. He used scores on California Test of Basic Skills mathematics computation and mathematics concepts and applications as his pretest scores and found significant differences in favor of the experimental group. His posttest scores showed the Saxon experimental group outperformed the control group on both computation and concepts and applications. Using analysis of covariance, the computation difference in favor of the experimental group was statistically significant; however, the difference in concepts and applications was adjusted to show no significant difference at the p < .05 level.
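A minimal sketch of this "control" approach is shown below, using simulated data and an ordinary least squares model with the pretest as a covariate; it is illustrative only and does not reproduce Rentschler's analysis or data.

```python
# Hedged sketch of analysis of covariance: the group effect is estimated after
# adjusting for pretest differences. Data, effect sizes, and group labels are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
data = pd.DataFrame({
    "group": np.repeat(["saxon", "control"], n // 2),
    "pretest": rng.normal(50, 10, n),
})
# Simulated posttest: depends on the pretest plus a modest treatment effect
data["posttest"] = (0.7 * data["pretest"] + 3.0 * (data["group"] == "saxon")
                    + rng.normal(0, 5, n))

# ANCOVA via OLS: posttest ~ pretest + group
model = smf.ols("posttest ~ pretest + C(group, Treatment(reference='control'))",
                data=data).fit()
print(model.summary().tables[1])   # the group coefficient is the adjusted treatment effect
```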

A fourth method was noted in studies that used less rigorous methods of selection of sample and comparison of prior achievement or similar demographics. These studies were coded as “compare.” Typically, there was no explicit procedure to decide if the comparison was good enough. In some of the studies, it appeared that the comparison was not used as a means of selection, but rather as a more informal device to convince the reader of the plausibility of the equivalence of the groups. Clearly, the studies that used a more precise method of selection were more likely to produce results on which one’s confidence in the conclusions is greater.

Definition of Unit of Analysis

A major decision in forming an evaluation design is the unit of analysis. The unit of selection or randomization used to assign elements to treatment and control groups is closely linked to the unit of analysis. As noted in the National Research Council (NRC) report (1992, p. 21):

If one carries out the assignment of treatments at the level of schools, then that is the level that can be justified for causal analysis. To analyze the results at the student level is to introduce a new, nonrandomized level into

the study, and it raises the same issues as does the nonrandomized observational study…. The implications … are twofold. First, it is advisable to use randomization at the level at which units are most naturally manipulated. Second, when the unit of observation is at a “lower” level of aggregation than the unit of randomization, then for many purposes the data need to be aggregated in some appropriate fashion to provide a measure that can be analyzed at the level of assignment. Such aggregation may be as simple as a summary statistic or as complex as a context-specific model for association among lower-level observations.

In many studies, inadequate attention was paid to the fact that the unit of selection would later become the unit of analysis. The unit of analysis, for most curriculum evaluators, needs to be at least the classroom, if not the school or even the district. The units must be independently responding units because instruction is a group process. Students are not independent; the classroom—even if the teachers work together in a school on instruction—is not entirely independent; thus the school is the unit. Care needed to be taken to ensure that an adequate number of units would be available to provide sufficient statistical power to detect important differences.

A curriculum is experienced by students in a group, and this implies that individual student responses and what they learn are correlated. As a result, the appropriate unit of assignment and analysis must at least be defined at the classroom or teacher level. Other researchers (Bryk et al., 1993) suggest that the unit might be better selected at an even higher level of aggregation. The school itself provides a culture in which the curriculum is enacted as it is influenced by the policies and assignments of the principal, by the professional interactions and governance exhibited by the teachers as a group, and by the community in which the school resides. This would imply that the school might be the appropriate unit of analysis. Even further, to the extent that such decisions about curriculum are made at the district level and supported through resources and professional development at that level, the appropriate unit could arguably be the district. On a more practical level, we found that arguments can be made for a variety of decisions on the selection of units, and what is most essential is to make a clear argument for one’s choice, to use the same unit in the analysis as in the sample selection process, and to recognize the potential limits to generalization that result from one’s decisions.
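The following sketch illustrates the aggregation step recommended in the NRC passage quoted above, using hypothetical student records: student scores are collapsed to classroom means so that the classroom, the unit of assignment, becomes the unit of analysis.

```python
# Minimal sketch of aggregating student-level scores to the unit of assignment
# (here, classrooms) before analysis. Data and column names are hypothetical.
import pandas as pd

students = pd.DataFrame({
    "classroom": ["c1", "c1", "c1", "c2", "c2", "c3", "c3", "c4", "c4"],
    "treatment": ["new", "new", "new", "new", "new", "trad", "trad", "trad", "trad"],
    "score":     [72, 68, 75, 80, 77, 65, 70, 69, 66],
})

# Aggregate to classroom means; the classroom (not the student) becomes the analysis unit
classrooms = students.groupby(["classroom", "treatment"], as_index=False)["score"].mean()
print(classrooms)

# Any subsequent comparison (e.g., a t test) is then run on these classroom means
print(classrooms.groupby("treatment")["score"].agg(["mean", "count"]))
```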

We would argue in all cases that reports of how sites are selected must be explicit in the evaluation report. For example, one set of evaluation studies selected sites by advertisements in a journal distributed by the program and in NCTM journals (UCSMP) (Thompson et al., 2001; Thompson et al., 2003). The samples in their studies tended to be affluent suburban populations and predominantly white populations. Other conditions of inclusion, such as frequency of use, also might have influenced this outcome,

but it is important that over a set of studies on effectiveness, all populations of students be adequately sampled. When a study is not randomized, adjustments for these confounding variables should be included. In our analysis of equity, we report on the concerns about representativeness of the overall samples and their impact on the generalizability of the results.

Implementation Components

The complexity of doing research on curricular materials introduces a number of possible confounding variables. Due to the documented complexity of curricular implementation, most comparative study evaluators attempt to monitor implementation in some fashion. A valuable outcome of a well-conducted evaluation is to determine not only whether the experimental curriculum could ideally have a positive impact on learning, but whether it can survive or thrive in the conditions of schooling that are so variable across sites. It is essential to know what the treatment was, whether it occurred, and if so, to what degree of intensity, fidelity, duration, and quality. In our model in Chapter 3, these factors were referred to as “implementation components.” Measuring implementation can be costly for large-scale comparative studies; however, many researchers have shown that variation in implementation is a key factor in determining effectiveness. In coding the comparative studies, we identified three types of components that help to document the character of the treatment: implementation fidelity, professional development treatments, and attention to teacher effects.

Implementation Fidelity

Implementation fidelity is a measure of the basic extent of use of the curricular materials. It does not address issues of instructional quality. In some studies, implementation fidelity is synonymous with “opportunity to learn.” In examining implementation fidelity, a variety of data were reported, including, most frequently, the extent of coverage of the curricular material, the consistency of the instructional approach to content in relation to the program’s theory, reports of pedagogical techniques, and the length of use of the curricula at the sample sites. Other less frequently used approaches documented the calendar of curricular coverage, requested teacher feedback by textbook chapter, conducted student surveys, and gauged homework policies, use of technology, and other particular program elements. Interviews with teachers and students, classroom surveys, and observations were the most frequently used data-gathering techniques. Classroom observations were conducted infrequently in these studies, except in cases when comparative studies were combined with case studies, typically with small numbers of schools and classes where observations

were conducted for long or frequent time periods. In our analysis, we coded only the presence or absence of one or more of these methods.

If the extent of implementation was used in interpreting the results, then we classified the study as having adjusted for implementation differences. Across all 63 at least minimally methodologically adequate studies, 44 percent reported some type of implementation fidelity measure, 3 percent reported and adjusted for it in interpreting their outcome measures, and 53 percent recorded no information on this issue. Differences among studies, by study type (NSF, UCSMP, and commercially generated), showed variation on this issue, with 46 percent of NSF reporting or adjusting for implementation, 75 percent of UCSMP, and only 11 percent of the other studies of commercial materials doing so. Of the commercial, non-UCSMP studies included, only one reported on implementation. Possibly, the evaluators for the NSF and UCSMP Secondary programs recognized more clearly that their programs demanded significant changes in practice that could affect their outcomes and could pose challenges to the teachers assigned to them.

A study by Abrams (1989) (EX) 3 on the use of Saxon algebra by ninth graders showed that concerns for implementation fidelity extend to all curricula, even those like Saxon whose methods may seem more likely to be consistent with common practice. Abrams wrote, “It was not the intent of this study to determine the effectiveness of the Saxon text when used as Saxon suggests, but rather to determine the effect of the text as it is being used in the classroom situations. However, one aspect of the research was to identify how the text is being taught, and how closely teachers adhere to its content and the recommended presentation” (p. 7). Her findings showed that for the 9 teachers and 300 students, treatment effects favoring the traditional group (using Dolciani’s Algebra I textbook, Houghton Mifflin, 1980) were found on the algebra test, the algebra knowledge/skills subtest, and the problem-solving test for this population of teachers (fixed effect). No differences were found between the groups on an algebra understanding/applications subtest, overall attitude toward mathematics, mathematical self-confidence, anxiety about mathematics, or enjoyment of mathematics. She suggests that the lack of differences might be due to the ways in which teachers supplement materials, change test conditions, emphasize

  

Neither of the studies referenced in this section met the criteria for inclusion in the comparative studies, but both shed direct light on comparative issues of implementation. The Abrams study was omitted because it examined a program at a grade level outside the specified grade band for that curriculum. Briars and Resnick (2000) did not provide explicit comparison scores that would permit one to evaluate the level of student attainment.

and deemphasize topics, use their own tests, vary the proportion of time spent on development and practice, use calculators and group work, and basically adapt the materials to their own interpretation and method. Many of these practices conflict directly with the recommendations of the authors of the materials.

A study by Briars and Resnick (2000) (EX) in Pittsburgh schools directly confronted issues relevant to professional development and implementation. Evaluators contrasted the performance of students of teachers with high and low implementation quality, and showed the results on two contrasting outcome measures, Iowa Test of Basic Skills (ITBS) and Balanced Assessment. Strong implementers were defined as those who used all of the EM components and provided student-centered instruction by giving students opportunities to explore mathematical ideas, solve problems, and explain their reasoning. Weak implementers were either not using EM or using it so little that the overall instruction in the classrooms was “hardly distinguishable from traditional mathematics instruction” (p. 8). Assignment was based on observations of student behavior in classes, the presence or absence of manipulatives, teacher questionnaires about the programs, and students’ knowledge of classroom routines associated with the program.

From the identification of strong- and weak-implementing teachers, strong- and weak-implementation schools were identified as those with strong- or weak-implementing teachers in 3rd and 4th grades over two consecutive years. The performance of students with 2 years of EM experience in these settings composed the comparative samples. Three pairs of strong- and weak-implementation schools with similar demographics in terms of free and reduced-price lunch (range 76 to 93 percent), students living with only one parent (range 57 to 82 percent), mobility (range 8 to 16 percent), and ethnicity (range 43 to 98 percent African American) were identified. These students’ 1st-grade ITBS scores indicated similarity in prior performance levels. Finally, evaluators predicted that if the effects were due to the curricular implementation and accompanying professional development, the effects on scores should be seen in 1998, after full implementation. Figure 5-4 shows that on the 1998 New Standards exams, placement in strong- and weak-implementation schools strongly affected students’ scores. Over three years, performance in the district on skills, concepts, and problem solving rose, confirming the evaluators’ predictions.

An article by McCaffrey et al. (2001) examining the interactions among instructional practices, curriculum, and student achievement illustrates the point that the terms traditional and reform teaching are often inadequately linked to measurement tools. In this study, researchers conducted an exploratory factor analysis that led them to create two scales for instructional practice: Reform Practices and


FIGURE 5-4 Percentage of students who met or exceeded the standard. Districtwide grade 4 New Standards Mathematics Reference Examination (NSMRE) performance for 1996, 1997, and 1998 by level of Everyday Mathematics implementation. Percentage of students who achieved the standard. Error bars denote the 99 percent confidence interval for each data point.

SOURCE: Re-created from Briars and Resnick (2000, pp. 19-20).

Traditional Practices. The reform scale measured the frequency, by means of teacher report, of teacher and student behaviors associated with reform instruction and assessment practices, such as using small-group work, explaining reasoning, representing and using data, writing reflections, or performing tasks in groups. The traditional scale focused on explanations to whole classes, the use of worksheets, practice, and short-answer assessments. There was a –0.32 correlation between the two scales’ scores for integrated curriculum teachers and a 0.27 correlation between scores for traditional

curriculum teachers. This shows that it is overly simplistic to think that reform and traditional practices are oppositional. The relationship among a variety of instructional practices is rather more complex as they interact with curriculum and various student populations.

Professional Development

Professional development and teacher effects were separated in our analysis from implementation fidelity. We recognized that professional development could be viewed by the readers of this report in two ways. As indicated in our model, professional development can be considered a program element or component or it can be viewed as part of the implementation process. When viewed as a program element, professional development resources are considered mandatory along with program materials. In relation to evaluation, proponents of considering professional development as a mandatory program element argue that curricular innovations, which involve the introduction of new topics, new types of assessment, or new ways of teaching, must make provision for adequate training, just as with the introduction of any new technology.

For others, the inclusion of professional development in the program elements without a concomitant inclusion of equal amounts of professional development relevant to a comparative treatment interjects a priori disproportionate treatments and biases the results. We hoped for an array of evaluation studies that might shed some empirical light on this dispute, and hence separated professional development from treatment fidelity, coding whether or not studies reported on the amount of professional development provided for the treatment and/or comparison groups. A study was coded as positive if it either reported on the professional development provided to the experimental group or reported the data on both treatments. Across all 63 at least minimally methodologically adequate studies, 27 percent reported some type of professional development measure, 1.5 percent reported and adjusted for it in interpreting their outcome measures, and 71.5 percent recorded no information on the issue.

A study by Collins (2002) (EX) 4 illustrates the critical and controversial role of professional development in evaluation. Collins studied the use of Connected Math over three years in three middle schools under threat of being classified as low performing in the Massachusetts accountability system. A comparison was made between one school (School A) that engaged

  

The Collins study lacked a comparison group and is coded as EX. However, it is reported as a case study.

substantively in professional development opportunities accompanying the program and two that did not (Schools B and C). In the CMP school (School A), reported totals of between 100 and 136 hours of professional development were recorded for all seven teachers in grades 6 through 8. In School B, 66 hours were reported for two teachers, and in School C, 150 hours were reported for eight teachers over three years. Results showed significant differences in the subsequent performance of students at the school with higher participation in professional development (School A), which became a districtwide top performer; the other two schools remained at risk of low performance. No controls for teacher effects were possible, but the results do suggest the centrality of professional development for successful implementation, or possibly that the results were due to professional development rather than to the curriculum materials. The fact that these two interpretations cannot be separated is a problem when professional development is given to one group and not the other. The effect could be due to the textbook, to professional development, or to an interaction between the two. Research designs should be adjusted to consider these issues when different conditions of professional development are provided.

Teacher Effects

These studies make it obvious that there are potential confounding factors of teacher effects. Many evaluation studies devoted inadequate attention to the variable of teacher quality. A few studies (Goodrow, 1998; Riordan and Noyce, 2001; Thompson et al., 2001; and Thompson et al., 2003) reported on teacher characteristics such as certification, length of service, experience with curricula, or degrees completed. Those studies that matched classrooms and reported by matched results rather than aggregated results sought ways to acknowledge the large variations among teacher performance and its impact on student outcomes. We coded any effort to report on possible teacher effects as one indicator of quality. Across all 63 at least minimally methodologically adequate studies, 16 percent reported some type of teacher effect measure, 3 percent reported and adjusted for it in interpreting their outcome measures, and 81 percent recorded no information on this issue.

One can see that the potential confounding factors of teacher effects, in terms of the provision of professional development or the measurement of teacher effects, are not adequately considered in most evaluation designs. Some studies mention the problem and give a subjective judgment as to its nature, but this is at most descriptive. Hardly any of the studies do anything analytical, and because these are such important potential confounding variables, this presents a serious challenge to the efficacy of these studies. Figure 5-5 shows how attention to these factors varies


FIGURE 5-5 Treatment of implementation components by program type.

NOTE: PD = professional development.

across the program categories of NSF-supported, UCSMP, and commercially generated materials. In general, evaluations of NSF-supported curricula were the most likely to measure these variables; UCSMP evaluations had the most standardized use of methods to do so across studies; and evaluators of commercial materials seldom reported on issues of implementation fidelity.

Identification of a Set of Outcome Measures and Forms of Disaggregation

Using the selected student outcomes identified in the program theory, one must conduct an impact assessment that refers to the design and measurement of student outcomes. In addition to selecting what outcomes should be measured within one’s program theory, one must determine how these outcomes are measured, when those measures are collected, and what

purpose they serve from the perspective of the participants. In the case of curricular evaluation, there are significant issues involved in how these measures are reported. To provide insight into the level of curricular validity, many evaluators prefer to report results by topic, content strand, or item cluster. These reports often present the level of specificity of outcome needed to inform curriculum designers, especially when efforts are made to document patterns of errors, distribution of results across multiple choices, or analyses of student methods. In these cases, whole test scores may mask essential differences in impact among curricula at the level of content topics, reporting only average performance.

On the other hand, many large-scale assessments depend on methods of test equating that rely on whole test scores, making comparative interpretations of different test administrations by content strand of questionable reliability. Furthermore, there are questions such as whether to present only gain scores or effect sizes, how to link pretests and posttests, and how to determine the relative curricular sensitivity of various outcome measures.
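To illustrate two of these reporting choices, the sketch below computes mean gain scores and a pooled-standard-deviation effect size from simulated pretest and posttest data; the numbers are hypothetical and the effect-size formula is one common choice among several.

```python
# Illustrative sketch: gain scores versus a standardized effect size (hypothetical data).
import numpy as np

rng = np.random.default_rng(4)
treat_pre, treat_post = rng.normal(50, 10, 100), rng.normal(56, 10, 100)
comp_pre, comp_post = rng.normal(50, 10, 100), rng.normal(52, 10, 100)

# Gain scores: average post-minus-pre change per group
gain_treat = (treat_post - treat_pre).mean()
gain_comp = (comp_post - comp_pre).mean()

# Effect size on the posttest: difference in means divided by the pooled standard deviation
pooled_sd = np.sqrt((treat_post.var(ddof=1) + comp_post.var(ddof=1)) / 2)
effect_size = (treat_post.mean() - comp_post.mean()) / pooled_sd

print(f"mean gain (treatment): {gain_treat:.1f}, mean gain (comparison): {gain_comp:.1f}")
print(f"posttest effect size (Cohen's d): {effect_size:.2f}")
```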

The findings of comparative studies are reported in terms of the outcome measure(s) collected. To describe the nature of the database with regard to outcome measures and to facilitate our analyses of the studies, we classified each of the included studies on four outcome measure dimensions:

Total score reported;

Disaggregation of content strands, subtest, performance level, SES, or gender;

Outcome measure that was specific to curriculum; and

Use of multiple outcome measures.

Most studies reported a total score, but we did find studies that reported only subtest scores or only scores on an item-by-item basis. For example, in the Ben-Chaim et al. (1998) evaluation study of Connected Math, the authors were interested in students’ proportional reasoning proficiency as a result of use of this curriculum. They asked students from eight seventh-grade classes of CMP and six seventh-grade classes from the control group to solve a variety of tasks categorized as rate and density problems. The authors provide precise descriptions of the cognitive challenges in the items; however, they do not explain if the problems written up were representative of performance on a larger set of items. A special rating form was developed to code responses in three major categories (correct answer, incorrect answer, and no response), with subcategories indicating the quality of the work that accompanied the response. No reports on reliability of coding were given. Performance on standardized tests indicated that control students’ scores were slightly higher than CMP at the beginning of the

year and lower at the end. Twenty-five percent of the experimental group members were interviewed about their approaches to the problems. The CMP students outperformed the control students (53 percent versus 28 percent) overall in providing the correct answers and support work, and 27 percent of the control group gave an incorrect answer or showed incorrect thinking compared to 13 percent of the CMP group. An item-level analysis permitted the researchers to evaluate the actual strategies used by the students. They reported, for example, that 82 percent of CMP students used a “strategy focused on package price, unit price, or a combination of the two; those effective strategies were used by only 56 of 91 control students (62 percent)” (p. 264).

The use of item- or content strand-level comparative reports had the advantage of permitting the evaluators to assess student learning strategies specific to a curriculum’s program theory. For example, at times, evaluators wanted to gauge the effectiveness of using problems different from those on typical standardized tests. In this case, problems were drawn from familiar circumstances but carefully designed to create significant cognitive challenges, in order to assess how well the informal strategies approach in CMP works in comparison to traditional instruction. The disadvantages of such an approach include the use of only a small number of items and concerns about the reliability of scoring. These studies seem to represent a method of creating hybrid research models that build on the detailed analyses possible in case studies while still reporting on samples that provide comparative data. This possibly reflects the concerns of some mathematicians and mathematics educators that the effectiveness of materials needs to be evaluated relative to very specific, research-based issues of learning, and that these are often inadequately measured by multiple-choice tests. However, a decision not to report total scores led to a trade-off in the reliability and representativeness of the reported data, which must be addressed to increase the objectivity of the reports.
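One common way to address the scoring-reliability concern noted above is to have two raters code a common sample of responses and report a chance-corrected agreement statistic. The sketch below computes Cohen's kappa for hypothetical codes in the three categories described; it is illustrative only and is not drawn from the Ben-Chaim et al. study.

```python
# Minimal sketch of a coding-reliability check: Cohen's kappa for two raters
# over the same (hypothetical) set of student responses.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters coding the same items."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

r1 = ["correct", "correct", "incorrect", "no response", "correct", "incorrect", "correct", "correct"]
r2 = ["correct", "incorrect", "incorrect", "no response", "correct", "correct", "correct", "correct"]
print(f"kappa = {cohens_kappa(r1, r2):.2f}")
```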

Second, we coded whether outcome data were disaggregated in some way. Disaggregation involved reporting data on dimensions such as content strand, subtest, test item, ethnic group, performance level, SES, and gender. We found disaggregated results particularly helpful in understanding the findings of studies that found main effects, and also in examining patterns across studies. We report the results of the studies’ disaggregation by content strand in our reports of effects. We report the results of the studies’ disaggregation by subgroup in our discussions of generalizability.

Third, we coded whether a study used an outcome measure that the evaluator reported as being sensitive to a particular treatment—this is a subcategory of what was defined in our framework as “curricular validity of measures.” In such studies, the rationale was that readily available measures such as state-mandated tests, norm-referenced standardized tests, and

college entrance examinations do not measure some of the aims of the program under study. A frequently cited instance of this was that “off the shelf” instruments do not measure well students’ ability to apply their mathematical knowledge to problems embedded in complex settings. Thus, some studies constructed a collection of tasks that assessed this ability and collected data on it (Ben-Chaim et al., 1998; Huntley et al., 2000).

Finally, we recorded whether a study used multiple outcome measures. Some studies used a variety of achievement measures and other studies reported on achievement accompanied by measures such as subsequent course taking or various types of affective measures. For example, Carroll (2001, p. 47) reported results on a norm-referenced standardized achievement test as well as a collection of tasks developed in other studies.

A study by Huntley et al. (2000) illustrates how a variety of these techniques were combined in the outcome measures. The researchers developed three assessments. The first emphasized contextualized problem solving based on items from the American Mathematical Association of Two-Year Colleges and other sources; the second assessed context-free symbolic manipulation; and the third required collaborative problem solving. To link these measures to the overall evaluation, they articulated an explicit model of cognition based on how one links an applied situation to mathematical activity through processes of formulation and interpretation. Their assessment strategy permitted them to investigate algebraic reasoning as the ability to use algebraic ideas and techniques to (1) mathematize quantitative problem situations, (2) use algebraic principles and procedures to solve equations, and (3) interpret the results of reasoning and calculations.

In presenting their data comparing performance on Core-Plus and the traditional curriculum, they presented both main effects and comparisons on subscales. Their design of outcome measures permitted them to examine differences in performance with and without context and to conclude with statements such as “This result illustrates that CPMP students perform better than control students when setting up models and solving algebraic problems presented in meaningful contexts while having access to calculators, but CPMP students do not perform as well on formal symbol-manipulation tasks without access to context cues or calculators” (p. 349). The authors go on to present data on the relationship between knowing how to plan or interpret solutions and knowing how to carry them out. The correlations between these variables were weak but significantly different (0.26 for control groups and 0.35 for Core-Plus). The advantage of using multiple measures carefully tied to program theory is that they can permit one to test fine content distinctions that are likely to be the level of adjustments necessary to fine-tune and improve curricular programs.

Another interesting approach to the use of outcome measures is found in the UCSMP studies. In many of these studies, evaluators collected

TABLE 5-2 Mean Percentage Correct on the Subject Tests

| Treatment Group | Geometry—Standard | Geometry—UCSMP | Advanced Algebra—UCSMP |
|---|---|---|---|
| UCSMP | 43.1, 44.7, 50.5 | 51.2, 54.5* | 56.1, 58.8, 56.1 |
| Comparison | 42.7, 45.5, 51.5 | 36.6, 40.8* | 42.0, 50.1, 50.0 |

NOTE: In each cell, an entry such as “43.1, 44.7, 50.5” means students were correct on 43.1 percent of the total items, 44.7 percent of the fair items for UCSMP, and 50.5 percent of the items that were taught in both treatments.

*Too few items to report data for the conservative test.

SOURCES: Adapted from Thompson et al. (2001); Thompson et al. (2003).

information from teachers’ reports and chapter reviews as to whether the topics of items on the posttests were taught, calling this an “opportunity to learn” measure. The authors reported results from three types of analyses: (1) total test scores, (2) fair test scores (scores reported by program, but only on items on topics taught), and (3) conservative test scores (scores on common items taught in both treatments). Table 5-2 reports the variation across the multiple-choice test scores for the Geometry study (Thompson et al., 2003) on a standardized test, High School Subject Tests-Geometry Form B, and the UCSMP-constructed Geometry test, and for the Advanced Algebra study on the UCSMP-constructed Advanced Algebra test (Thompson et al., 2001). The table shows the mean scores for UCSMP classes and comparison classes. In each cell, mean percentage correct is reported first for the whole test, then for the fair test, and then for the conservative test.

The authors explicitly compare the items from the standard Geometry test with the items from the UCSMP test and indicate overlap and difference. They constructed their own test because, in their view, the standard test was not adequately balanced among skills, properties, and real-world uses. The UCSMP test included items on transformations, representations, and applications that were lacking in the national test. Only five items were taught by all teachers; hence in the case of the UCSMP geometry test, there is no report on a conservative test. In the Advanced Algebra evaluation, only a UCSMP-constructed test was viewed as appropriate to cover the treatment of the prior material and alignment to the goals of the new course. These data sets demonstrate the challenge of selecting appropriate outcome measures, the sensitivity of the results to those decisions, and the importance of full disclosure of decision-making processes in order to permit readers to assess the implications of the choices. The methodology utilized sought to ensure that the material in the course was covered adequately by treatment teachers while finding ways to make comparisons that reflected content coverage.
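The sketch below illustrates how whole, fair, and conservative scores of this kind might be computed from an item-response matrix and teacher-reported opportunity-to-learn flags; the data are simulated and the variable names are assumptions for illustration, not the UCSMP evaluators' procedure.

```python
# Minimal sketch of whole / fair / conservative test scoring (hypothetical data).
import numpy as np

rng = np.random.default_rng(5)
n_students, n_items = 30, 40
responses = rng.random((n_students, n_items)) < 0.55      # True = item answered correctly
taught_by_group = rng.random(n_items) < 0.8               # teacher-reported OTL, this group
taught_by_both = taught_by_group & (rng.random(n_items) < 0.75)   # taught in both treatments

def percent_correct(resp, item_mask):
    """Mean percentage correct over the selected items."""
    return 100.0 * resp[:, item_mask].mean()

print(f"whole test:        {percent_correct(responses, np.ones(n_items, bool)):.1f}%")
print(f"fair test:         {percent_correct(responses, taught_by_group):.1f}%")
print(f"conservative test: {percent_correct(responses, taught_by_both):.1f}%")
```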

Only one study reported on its outcomes using embedded assessment items employed over the course of the year. In a study of Saxon and UCSMP, Peters (1992) (EX) studied the use of these materials with two classrooms taught by the same teacher. In this small study, he randomly assigned students to treatment groups and then measured their performance on four unit tests composed of items common to both curricula and their progress on the Orleans-Hanna Algebraic Prognosis Test.

Peters’ study showed no significant difference in placement scores between Saxon and UCSMP on the posttest, but did show differences on the embedded assessment. Figure 5-6 (Peters, 1992, p. 75) shows an interesting display of the differences on a “continuum” that shows both the direction and magnitude of the differences and provides a level of concept specificity missing in many reports. This figure and a display ( Figure 5-7 ) in a study by Senk (1991, p. 18) of students’ mean scores on Curriculum A versus Curriculum B with a 10 percent range of differences marked represent two excellent means to communicate the kinds of detailed content outcome information that promises to be informative to curriculum writers, publishers, and school decision makers. In Figure 5-7 , 16 items listed by number were taken from the Second International Mathematics Study. The Functions, Statistics, and Trigonometry sample averaged 41 percent correct on these items whereas the U.S. precalculus sample averaged 38 percent. As shown in the figure, differences of 10 percent or less fall inside the banded area and greater than 10 percent fall outside, producing a display that makes it easy for readers and designers to identify the relative curricular strengths and weaknesses of topics.

FIGURE 5-6 Continuum of criterion score averages for studied programs.

SOURCE: Peters (1992, p. 75).

While we value detailed outcome measure information, we also recognize the importance of examining curricular impact on students’ standardized test performance. Many developers, but not all, are explicit in rejecting standardized tests as adequate measures of the outcomes of their programs, claiming that these tests focus on skills and manipulations, that they are overly reliant on multiple-choice questions, and that they are often poorly aligned to new content emphases such as probability and statistics, transformations, use of contextual problems and functions, and process skills, such as problem solving, representation, or use of calculators. However, national and state tests are being revised to include more content on these topics and to draw on more advanced reasoning. Furthermore, these high-stakes tests are of major importance in school systems, determining graduation, passing standards, school ratings, and so forth. For this reason, if a curricular program demonstrated positive impact on such measures, we referred to that in Chapter 3 as establishing “curricular alignment with systemic factors.” Adequate performance on these measures is of paramount importance to the survival of reform (to large groups of parents and school administrators). These examples demonstrate how careful attention to outcome measures is an essential element of valid evaluation.

In Table 5-3 , we document the number of studies using a variety of types of outcome measures that we used to code the data, and also report on the types of tests used across the studies.


FIGURE 5-7 Achievement (percentage correct) on Second International Mathematics Study (SIMS) items by U.S. precalculus students and functions, statistics, and trigonometry (FST) students.

SOURCE: Re-created from Senk (1991, p. 18).

TABLE 5-3 Number of Studies Using a Variety of Outcome Measures by Program Type

Program Type | Total Test (Yes / No) | Content Strands (Yes / No) | Test Match to Program (Yes / No) | Multiple Test (Yes / No)
NSF          | 43 / 3                | 28 / 18                    | 26 / 20                          | 21 / 25
Commercial   | 8 / 1                 | 4 / 5                      | 2 / 7                            | 2 / 7
UCSMP        | 7 / 1                 | 7 / 1                      | 7 / 1                            | 7 / 1

A Choice of Statistical Tests, Including Statistical Significance and Effect Size

In our first review of the studies, we coded what methods of statistical evaluation were used by different evaluators. Most common were t-tests; less frequently one found Analysis of Variance (ANOVA), Analysis of Covariance (ANCOVA), and chi-square tests. In a few cases, results were reported using multiple regression or hierarchical linear modeling. Some used multiple tests; hence the total exceeds 63 (Figure 5-8).

FIGURE 5-8 Statistical tests most frequently used.

One of the difficult aspects of doing curriculum evaluations concerns using the appropriate unit both in terms of the unit to be randomly assigned in an experimental study and the unit to be used in statistical analysis in either an experimental or quasi-experimental study.

For our purposes, we decided that unless the study concerned an intact student population, such as the freshmen at a single university, where a student comparison was the correct unit, the unit for statistical tests should be at least the classroom level. Judgments were made for each study as to whether the appropriate unit was utilized. This question is an important one because statistical significance is related to sample size, and as a result, studies that inappropriately use the student as the unit of analysis could conclude that significant differences exist where they do not. For example, if achievement differences between two curricula are tested in 16 classrooms with 400 students, it will always be easier to show significant differences using scores from those 400 students than using 16 classroom means.
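
The point can be illustrated with a small simulation; the sketch below is ours rather than the committee's, and the class counts, variance components, and effect size are arbitrary assumptions chosen only to show how a student-level test and a class-mean test can diverge when students are clustered within classrooms.

```python
# Sketch: why the unit of analysis matters. We simulate 16 classrooms
# (8 per curriculum, 25 students each) with a classroom-level random effect,
# then test the curriculum difference at the student level and at the
# class-mean level. All numbers are illustrative assumptions only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_classes_per_group, n_students = 8, 25
class_effect_sd, student_sd = 8.0, 12.0   # assumed variance components
true_difference = 3.0                      # assumed small curricular effect

def simulate_group(mean):
    # each class gets its own random shift; students vary around that shift
    class_means = mean + rng.normal(0, class_effect_sd, n_classes_per_group)
    return [m + rng.normal(0, student_sd, n_students) for m in class_means]

control = simulate_group(50.0)
treatment = simulate_group(50.0 + true_difference)

# Student-level test: 400 observations, ignores the clustering
t_stu, p_stu = stats.ttest_ind(np.concatenate(treatment), np.concatenate(control))

# Class-level test: 16 class means, respects the unit of assignment
t_cls, p_cls = stats.ttest_ind([c.mean() for c in treatment],
                               [c.mean() for c in control])

print(f"student level: t = {t_stu:.2f}, p = {p_stu:.4f}  (n = 400)")
print(f"class means  : t = {t_cls:.2f}, p = {p_cls:.4f}  (n = 16)")
```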

Fifty-seven studies used students as the unit of analysis in at least one test of significance. Three of these were coded as correct because they involved whole populations. In all, 10 studies were coded as using the correct unit of analysis; hence, 7 studies used teachers, classes, or schools as the unit.

TABLE 5-4 Performance on Applied Algebra Problems with Use of Calculators, Part 1

Treatment | n   | M (0-100) | SD
Control   | 273 | 34.1      | 14.8
CPMP      | 320 | 42.6      | 21.3

NOTE: t = -5.69, p < .001. All sites combined.

SOURCE: Huntley et al. (2000). Reprinted with permission.

TABLE 5-5 Reanalysis of Algebra Performance Data

Site               | Site Mean: Control | Site Mean: CPMP | Independent Samples Difference | Dependent Sample Difference
1                  | 31.7               | 35.5            |                                | 3.8
2                  | 26.0               | 49.4            |                                | 23.4
3                  | 36.7               | 25.2            |                                | -11.5
4                  | 41.9               | 47.7            |                                | 5.8
5                  | 29.4               | 38.3            |                                | 8.9
6                  | 30.5               | 45.6            |                                | 15.1
Average            | 32.7               | 40.3            | 7.58                           | 7.58
Standard deviation | 5.70               | 9.17            | 7.64                           | 11.75
Standard error     |                    |                 | 4.41                           | 4.80
t                  |                    |                 | 1.7                            | 1.6
p                  |                    |                 | 0.116                          | 0.175

SOURCE: Huntley et al. (2000).

For some studies where multiple tests were conducted, a judgment was made as to whether the primary conclusions drawn treated the unit of analysis adequately. For example, Huntley et al. (2000) compared the performance of CPMP students with students in a traditional course on a measure of ability to formulate and use algebraic models to answer various questions about relationships among variables. The analysis used students as the unit of analysis and showed a significant difference, as shown in Table 5-4.

To examine the robustness of this result, we reanalyzed the data using an independent samples t-test and a matched pairs t-test with class means as the unit of analysis in both tests (Table 5-5). As can be seen from the analyses, in neither statistical test was the difference between groups significant (p < .05), thus emphasizing the importance of using the correct unit in analyzing the data.
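
The reanalysis can be reproduced from the six site means reported in Table 5-5; the short sketch below does so with SciPy, which is our choice of tool rather than anything used in the original studies or by the committee.

```python
# Reproducing the Table 5-5 reanalysis from the six site means reported by
# Huntley et al. (2000), using class (site) means as the unit of analysis.
from scipy import stats

control = [31.7, 26.0, 36.7, 41.9, 29.4, 30.5]
cpmp    = [35.5, 49.4, 25.2, 47.7, 38.3, 45.6]

t_ind, p_ind = stats.ttest_ind(cpmp, control)   # independent samples t-test
t_dep, p_dep = stats.ttest_rel(cpmp, control)   # matched pairs (dependent) t-test

print(f"independent samples: t = {t_ind:.2f}, p = {p_ind:.3f}")  # approx. 1.7, 0.116
print(f"matched pairs      : t = {t_dep:.2f}, p = {p_dep:.3f}")  # approx. 1.6, 0.175
```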

Reanalysis of student-level data using class means will not always result in a change in finding. Furthermore, using class means as the unit of analysis does not suggest that significant differences will not be found. For example, a study by Thompson et al. (2001) compared the performance of UCSMP students with the performance of students in a more traditional program across several measures of achievement. They found significant differences between UCSMP students and the non-UCSMP students on several measures. Table 5-6 shows results of an analysis of a multiple-choice algebraic posttest using class means as the unit of analysis. Significant differences were found in five of eight separate classroom comparisons, as shown in the table. They also found a significant difference using a matched-pairs t-test on class means.

TABLE 5-6 Mean Percentage Correct on Entire Multiple-Choice Posttest: Second Edition and Non-UCSMP

School Code | Pair ID | UCSMP Second Edition (n, Mean, SD, OTL) | Non-UCSMP (n, Mean, SD, OTL) | SE   | t    | df | p
J           | 18      | 18, 60.8, 9.0, 100                      | 14, 55.2, 10.2, 69           | 3.40 | 1.65 | 30 | 0.110
J           | 19      | 11, 58.8, 13.5, 100                     | 15, 53.7, 11.0, 69           | 4.81 | 1.06 | 24 | 0.299
K           | 20      | 22, 63.8, 13.0, 94                      | 24, 45.9, 10.0, 72           | 3.41 | 5.22 | 44 | —
K           | 21      | 16, 64.8, 14.0, 94                      | 23, 43.0, 11.9, 72           | 4.16 | 5.23 | 37 | —
L           | 22      | 19, 57.6, 16.9, 92                      | 20, 38.8, 9.1, 75            | 4.32 | 4.36 | 37 | —
L           | 23      | 13, 44.7, 11.2, 92                      | 15, 38.3, 11.0, 75           | 4.20 | 1.52 | 26 | 0.140
M           | 24      | 29, 58.4, 12.7, 92                      | 22, 37.8, 13.8, 47           | 3.72 | 5.56 | 49 | —
M           | 25      | 22, 39.6, 13.5, 92                      | 23, 30.8, 9.9, 47            | 3.52 | 2.51 | 43 | —
Overall     |         | 150, 56.1, 15.4                         | 156, 42.0, 13.1              |      |      |    |

NOTE: The mean is the mean percentage correct on a 36-item multiple-choice posttest. The OTL is the percentage of the items for which teachers reported their students had the opportunity to learn the needed content. Underline indicates statistically significant differences between the mean percentage correct for each pair. A matched-pairs t-test indicates that the differences between the two curricula are significant.

SOURCE: Thompson et al. (2001). Reprinted with permission.

The lesson to be learned from these reanalyses is that the choice of unit of analysis and the way the data are aggregated can impact study findings in important ways including the extent to which these findings can be generalized. Thus it is imperative that evaluators pay close attention to such considerations as the unit of analysis and the way data are aggregated in the design, implementation, and analysis of their studies.


Second, effect size has become a relatively common and standard way of gauging the practical significance of the findings. Statistical significance only indicates whether the mean-level differences between two curricula are large enough not to be due to chance, assuming they come from the same population. When statistical differences are found, the question remains as to whether such differences are large enough to consider. Because any innovation has its costs, the question becomes one of cost-effectiveness: Are the differences in student achievement large enough to warrant the costs of change? Quantifying the practical effect once statistical significance is established is one way to address this issue. There is a statistical literature for doing this, and for the purposes of this review, the committee simply noted whether these studies have estimated such an effect. However, the committee further noted that in conducting meta-analyses across these studies, effect size was likely to be of little value. These studies used an enormous variety of outcome measures, and even using effect size as a means to standardize units across studies is not sensible when the measures in each study address such a variety of topics, forms of reasoning, content levels, and assessment strategies.
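
As one concrete illustration of what such an estimate involves, the sketch below computes a pooled-standard-deviation effect size (Cohen's d) from the summary statistics in Table 5-4. The choice of Cohen's d is ours and this is not a calculation reported by the evaluators; it simply shows how a standardized effect can be derived from the published means, standard deviations, and sample sizes.

```python
# Sketch: estimating a standardized effect size from the Table 5-4 summary
# statistics (Control: n=273, M=34.1, SD=14.8; CPMP: n=320, M=42.6, SD=21.3).
# Cohen's d with a pooled standard deviation is used purely as an illustration.
from math import sqrt

n1, m1, sd1 = 273, 34.1, 14.8   # control
n2, m2, sd2 = 320, 42.6, 21.3   # CPMP

pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (m2 - m1) / pooled_sd

print(f"pooled SD = {pooled_sd:.1f}, Cohen's d = {d:.2f}")  # roughly d = 0.46
```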

We note that very few studies drew upon advances in modeling methodologies, which include causal modeling, hierarchical linear modeling (Bryk and Raudenbush, 1992; Bryk et al., 1993), and selection bias modeling (Heckman and Hotz, 1989). Although developing detailed specifications for these approaches is beyond the scope of this review, we wish to emphasize that these methodological advances should be considered within future evaluation designs.

Results and Limitations to Generalizability Resulting from Design Constraints

One also must consider what generalizations can be drawn from the results (Campbell and Stanley, 1966; Caporaso and Roos, 1973; Boruch, 1997). Generalization is a matter of external validity in that it determines to what populations the study results are likely to apply. In designing an evaluation study, one must carefully consider, in the selection of units of analysis, how various characteristics of those units will affect the generalizability of the study. It is common for evaluators to conflate issues of representativeness for the purpose of generalizability (external validity) and comparativeness (the selection of or adjustment for comparative groups [internal validity]). Not all studies must be representative of the population served by mathematics curricula to be internally valid. But, to be generalizable beyond restricted communities, representativeness must be obtained by the random selection of the basic units. Clearly specifying such limitations to generalizability is critical. Furthermore, on the basis of equity considerations, one must be sure that, if overall effectiveness is claimed, the studies have been conducted and analyzed with reference to all relevant subgroups.

Thus, depending on the design of a study, its results may be limited in generalizability to other populations and circumstances. We identified four typical kinds of limitations on the generalizability of studies and coded them to determine, on the whole, how generalizable the results across studies might be.

First, there were studies whose designs were limited by the ability or performance level of the students in the samples. It was not unusual to find that when new curricula were implemented at the secondary level, schools kept in place systems of tracking that assigned the top students to traditional college-bound curriculum sequences. As a result, studies either used comparative groups who were matched demographically but less skilled than the population as a whole, in relation to prior learning, or their results compared samples of less well-prepared students to samples of students with stronger preparations. Alternatively, some studies reported on the effects of curricular reform on gifted and talented students or on college-attending students. In these cases, the study results would also limit the generalizability of the results to similar populations. Reports using limited samples of students’ ability and prior performance levels were coded as a limitation to the generalizability of the study.

For example, Wasman (2000) conducted a study of one school (six teachers) and examined the students’ development of algebraic reasoning after one (n=100) and two years (n=73) in CMP. In this school, the top 25 percent of the students are counseled to take a more traditional algebra course, so her experimental sample, which was 61 percent white, 35 percent African American, 3 percent Asian, and 1 percent Hispanic, consisted of the lower 75 percent of the students. She reported on the student performance on the Iowa Algebraic Aptitude Test (IAAT) (1992), in the subcategories of interpreting information, translating symbols, finding relationships, and using symbols. Results for Forms 1 and 2 of the test, for the experimental and norm group, are shown in Table 5-7 for 8th graders.

In our coding of outcomes, this study was coded as showing no significant differences, although arguably its results demonstrate a positive set of outcomes as the treatment group was weaker than the control group. Had the researcher used a prior achievement measure and a different statistical technique, significance might have been demonstrated, although potential teacher effects confound interpretations of results.

TABLE 5-7 Comparing Iowa Algebraic Aptitude Test (IAAT) Mean Scores of the Connected Mathematics Project Forms 1 and 2 to the Normative Group (8th Graders)

Group                   | Interpreting Information | Translating Symbols | Finding Relationships | Using Symbols | Total
CMP: Form 1, 7th (n=51) | 9.35 (3.36)              | 8.22 (3.44)         | 9.90 (3.26)           | 8.65 (3.12)   | 36.12 (11.28)
CMP: Form 1, 8th (n=41) | 9.76 (3.89)              | 8.56 (3.64)         | 9.41 (4.13)           | 8.27 (3.74)   | 36.00 (13.65)
Norm: Form 1 (n=2,467)  | 10.03 (3.35)             | 9.55 (2.89)         | 9.14 (3.59)           | 8.87 (3.19)   | 37.59 (10.57)
CMP: Form 2, 7th (n=49) | 9.41 (4.05)              | 7.82 (3.03)         | 9.29 (3.57)           | 7.65 (3.35)   | 34.16 (11.47)
CMP: Form 2, 8th (n=32) | 11.28 (3.74)             | 8.66 (3.81)         | 10.94 (3.79)          | 9.81 (3.64)   | 40.69 (12.94)
Norm: Form 2 (n=2,467)  | 10.63 (3.78)             | 8.58 (2.91)         | 8.67 (3.84)           | 9.19 (3.17)   | 37.07 (11.05)

NOTE: Parentheses indicate standard deviation.

SOURCE: Adapted from Wasman (2000).


A second limitation to generalizability was when comparative studies resided entirely at curriculum pilot site locations, where such sites were developed as a means to conduct formative evaluations of the materials with close contact and advice from teachers. Typically, pilot sites have unusual levels of teacher support, whether it is in the form of daily technical support in the use of materials or technology or increased quantities of professional development. These sites are often selected for study because they have established cooperative agreements with the program developers and other sources of data, such as classroom observations, are already available. We coded whether the study was conducted at a pilot site to signal potential limitations in generalizability of the findings.

Third, studies were also coded as being of limited generalizability if they failed to disaggregate their data by socioeconomic class, race, gender, or some other potentially significant sources of restriction on the claims. We recorded the categories in which disaggregation occurred and compiled their frequency across the studies. Because of the need to open the pipeline to advanced study in mathematics by members of underrepresented groups, we were particularly concerned about gauging the extent to which evaluators factored such variables into their analysis of results and not just in terms of the selection of the sample.

Of the 46 included studies of NSF-supported curricula, 19 disaggregated their data by student subgroup. Nine of 17 studies of commercial materials disaggregated their data. Figure 5-9 shows the number of studies that disaggregated outcomes by race or ethnicity, SES, gender, LEP, special education status, or prior achievement. Studies using multiple categories of disaggregation were counted multiple times by program category.

The last category of restricted generalization occurred in studies of limited sample size. Although such studies may have provided more in-depth observations of implementation and reports on professional development factors, the smaller numbers of classrooms and students in the study would limit the extent of generalization that could be drawn from it. Figure 5-10 shows the distribution of sizes of the samples in terms of numbers of students by study type.

FIGURE 5-9 Disaggregation of subpopulations.

FIGURE 5-10 Proportion of studies by sample size and program.

Summary of Results by Student Achievement Among Program Types

We present the results of the studies as a means to further investigate their methodological implications. To this end, for each study, we counted across outcome measures the number of findings that were positive, negative, or indeterminate (no significant difference) and then calculated the proportion of each. We represented the calculation of each study as a triplet (a, b, c), where a indicates the proportion of the results that were positive and statistically significantly stronger than the comparison program, b indicates the proportion that were negative and statistically significantly weaker than the comparison program, and c indicates the proportion that showed no significant difference between the treatment and the comparative group. For studies with a single outcome measure, without disaggregation by content strand, the triplet is always composed of two zeros and a single one. For studies with multiple measures or disaggregation by content strand, the triplet is typically a set of three decimal values that sum to one. For example, a study with one outcome measure in favor of the experimental treatment would be coded (1, 0, 0), while one with multiple measures and mixed results more strongly in favor of the comparative curriculum might be listed as (.20, .50, .30). This triplet would mean that for 20 percent of the comparisons examined, the evaluators reported statistically significant positive results, for 50 percent of the comparisons the results were statistically significant in favor of the comparison group, and for 30 percent of the comparisons no significant differences were found. Overall, the mean score on these distributions was (.54, .07, .40), indicating that across all the studies, 54 percent of the comparisons favored the treatment, 7 percent favored the comparison group, and 40 percent showed no significant difference. Table 5-8 shows the comparison by curricular program types. We present the results by individual program types, because each program type relies on a similar program theory and hence could lead to patterns of results that would be lost in combining the data. If the studies of commercial materials are all grouped together to include UCSMP, their pattern of results is (.38, .11, .51). Again we emphasize that due to our call for increased methodological rigor and the use of multiple methods, this result is not sufficient to establish the curricular effectiveness of these programs as a whole with adequate certainty.
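
A minimal sketch of this triplet bookkeeping follows; the study names and outcome codes are fabricated placeholders, and only the counting and averaging logic mirrors the description above.

```python
# Sketch: coding each study as a triplet (a, b, c) = proportions of comparisons
# that favored the treatment, favored the comparison, or showed no significant
# difference, then averaging the triplets across studies (unweighted).
# The study names and outcome codes below are hypothetical placeholders.

studies = {
    "study_1": ["+"],                      # single measure favoring treatment -> (1, 0, 0)
    "study_2": ["+", "-", "0", "0", "+"],  # mixed results across five comparisons
    "study_3": ["0", "0", "-"],
}

def triplet(outcomes):
    n = len(outcomes)
    return (outcomes.count("+") / n,   # a: favors treatment
            outcomes.count("-") / n,   # b: favors comparison
            outcomes.count("0") / n)   # c: no significant difference

triplets = [triplet(o) for o in studies.values()]
mean_triplet = tuple(round(sum(t[i] for t in triplets) / len(triplets), 2)
                     for i in range(3))

for name, t in zip(studies, triplets):
    print(name, tuple(round(x, 2) for x in t))
print("mean across studies:", mean_triplet)
```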

TABLE 5-8 Comparison by Curricular Program Types

Proportion of Results That Are:  | NSF-Supported (n=46) | UCSMP (n=8) | Commercially Generated (n=9)
In favor of treatment            | .591                 | .491        | .285
In favor of comparison           | .055                 | .087        | .130
Show no significant difference   | .354                 | .422        | .585

We caution readers that these results are summaries of the results presented across a set of evaluations that meet only the standard of at least minimally methodologically adequate. Calculations of statistical significance of each program’s results were reported by the evaluators; we have made no adjustments for weaknesses in the evaluations such as inappropriate use of units of analysis in calculating statistical significance. Evaluations that consistently used the correct unit of analysis, such as UCSMP, could have fewer reports of significant results as a consequence. Furthermore, these results are not weighted by study size. Within any study, the results pay no attention to comparative effect size or to the established credibility of an outcome measure. Similarly, these results do not take into account differences in the populations sampled, an important consideration in generalizing the results. For example, using the same set of studies as an example, UCSMP studies used volunteer samples who responded to advertisements in their newsletters, resulting in samples with disproportionately Caucasian subjects from wealthier schools compared to national samples. As a result, we would suggest that these results are useful only as baseline data for future evaluation efforts. Our purpose in calculating these results is to permit us to create filters from the critical decision points and test how the results change as one applies more rigorous standards.

Given that none of the studies adequately addressed all of the critical criteria, we do not offer these results as definitive, only suggestive—a hypothesis for further study. In effect, given the limitations of time and support, and the urgency of providing advice related to policy, we offer this filtering approach as an informal meta-analytic technique sufficient to permit us to address our primary task, namely, evaluating the quality of the evaluation studies.

This approach reflects the committee’s view that to deeply understand and improve methodology, it is necessary to scrutinize the results and to determine what inferences they provide about the conduct of future evaluations. Analogous to debates on consequential validity in testing, we argue that to strengthen methodology, one must consider what current methodologies are able (or not able) to produce across an entire series of studies. The remainder of the chapter is focused on considering in detail what claims are made by these studies, and how robust those claims are when subjected to challenge by alternative hypothesis, filtering by tests of increasing rigor, and examining results and patterns across the studies.

Alternative Hypotheses on Effectiveness

In the spirit of scientific rigor, the committee sought to consider rival hypotheses that could explain the data. Given the weaknesses in the designs generally, often these alternative hypotheses cannot be dismissed. However, we believed that only after examining the configuration of results and alternative hypotheses can the next generation of evaluations be better informed and better designed. We began by generating alternative hypotheses to explain the positive directionality of the results in favor of experimental groups. Alternative hypotheses included the following:

The teachers in the experimental groups tended to be self-selecting early adopters, and thus able to achieve effects not likely in regular populations.

Changes in student outcomes reflect the effects of professional development instruction, or level of classroom support (in pilot sites), and thus inflate the predictions of effectiveness of curricular programs.

A Hawthorne effect (Franke and Kaul, 1978) occurs when treatments are compared to everyday practices, because motivational factors influence experimental participants.

The consistent difference is due to the coherence and consistency of a single curricular program when compared to multiple programs.

The significance level is only achieved by the use of the wrong unit of analysis to test for significance.

Supplemental materials or new teaching techniques produce the results and not the experimental curricula.

Significant results reflect inadequate outcome measures that focus on a restricted set of activities.

The results are due to evaluator bias because too few evaluators are independent of the program developers.

At the same time, one could argue that the results actually underestimate the performance of these materials and are conservative measures, and their alternative hypotheses also deserve consideration:

Many standardized tests are not sensitive to these curricular approaches, and by eliminating studies focusing on affect, we eliminated a key indicator of the appeal of these curricula to students.

Poor implementation or increased demands on teachers’ knowledge dampens the effects.

Often in the experimental treatment, top-performing students are missing as they are advised to take traditional sequences, rendering the samples unequal.

Materials are not well aligned with universities and colleges because tests for placement and success in early courses focus extensively on algebraic manipulation.

Program implementation has been undercut by negative publicity and the fears of parents concerning change.

There are also a number of possible hypotheses that may be affecting the results in either direction, and we list a few of these:

Examining the role of the teacher in curricular decision making is an important element in effective implementation, and design mandates of evaluation design make this impossible (and the positives and negatives of single- versus dual-track curricula, as in Lundin, 2001).

Local tests that are sensitive to the curricular effects typically are not mandatory and hence may lead to unpredictable performance by students.

Different types and extent of professional development may affect outcomes differentially.

Persistence or attrition may affect the mean scores and are often not considered in the comparative analyses.

One could also generate reasons why the curricular programs produced results showing no significance when one program or the other is actually more effective. This could include high degrees of variability in the results, samples that used the correct unit of analysis but did not obtain consistent participation across enough cases, implementation that did not show enough fidelity to the measures, or outcome measures insensitive to the results. Again, subsequent designs should be better informed by these findings to improve the likelihood that they will produce less ambiguous results and replication of studies could also give more confidence in the findings.

It is beyond the scope of this report to consider each of these alternative hypotheses separately and to seek confirmation or refutation of them. However, in the next section, we describe a set of analyses carried out by the committee that permits us to examine and consider the impact of various critical evaluation design decisions on the patterns of outcomes across sets of studies. A number of analyses shed some light on various alternative hypotheses and may inform the conduct of future evaluations.

Filtering Studies by Critical Decision Points to Increase Rigor

In examining the comparative studies, we identified seven critical decision points that we believed would directly affect the rigor and efficacy of the study design. These decision points were used to create a set of 16 filters. These are listed as the following questions:

Was there a report on comparability relative to SES?

Was there a report on comparability of samples relative to prior knowledge?

Was there a report on treatment fidelity?

Was professional development reported on?

Was the comparative curriculum specified?

Was there any attempt to report on teacher effects?

Was a total test score reported?

Was total test score(s) disaggregated by content strand?

Did the outcome measures match the curriculum?

Were multiple tests used?

Was the appropriate unit of analysis used in their statistical tests?

Did they estimate effect size for the study?

Was the generalizability of their findings limited by use of a restricted range of ability levels?

Was the generalizability of their findings limited by use of pilot sites for their study?

Was the generalizability of their findings limited by not disaggregating their results by subgroup?

Was the generalizability of their findings limited by use of small sample size?

The studies were coded to indicate if they reported having addressed these considerations. In some cases, the decision points were coded dichotomously as present or absent in the studies, and in other cases, the decision points were coded trichotomously, as description presented, absent, or statistically adjusted for in the results. For example, a study may or may not report on the comparability of the samples in terms of race, ethnicity, or socioeconomic status. If a report on SES was given, the study was coded as “present” on this decision; if a report was missing, it was coded as “absent”; and if SES status or ethnicity was used in the analysis to actually adjust outcomes, it was coded as “adjusted for.” For each coding, the table that follows reports the number of studies that met that condition, and then reports on the mean percentage of statistically significant results, and results showing no significant difference for that set of studies. A significance test is run to see if the application of the filter produces changes in the probability that are significantly different.[5]

In the cases in which studies are coded into three distinct categories—present, absent, and adjusted for—a second set of filters is applied. First, the studies coded as present or adjusted for are combined and compared to those coded as absent; this is what we refer to as a weak test of the rigor of the study. Second, the studies coded as present or absent are combined and compared to those coded as adjusted for. This is what we refer to as a strong test. For dichotomous codings, there can be as few as three comparisons, and for trichotomous codings, there can be nine comparisons with accompanying tests of significance. Trichotomous codes were used for adjustments for SES and prior knowledge, examining treatment fidelity, professional development, teacher effects, and reports on effect sizes. All others were dichotomous.

[5] The significance test used was a chi-square not corrected for discontinuity.

NSF Studies and the Filters

For example, there were 11 studies of NSF-supported curricula that simply reported on the issues of SES in creating equivalent samples for comparison, and for this subset the mean probabilities of getting positive, negative, or results showing no significant difference were (.47, .10, .43). If no report of SES was supplied (n= 21), those probabilities become (.57, .07, .37), indicating an increase in positive results and a decrease in results showing no significant difference. When an adjustment is made in outcomes based on differences in SES (n=14), the probabilities change to (.72, .00, .28), showing a higher likelihood of positive outcomes. The probabilities that result from filtering should always be compared back to the overall results of (.59, .06, .35) (see Table 5-8 ) so as to permit one to judge the effects of more rigorous methodological constraints. This suggests that a simple report on SES without adjustment is least likely to produce positive outcomes; that is, no report produces the outcomes next most likely to be positive and studies that adjusted for SES tend to have a higher proportion of their comparisons producing positive results.

The second method of applying the filter (the weak test for rigor) to the treatment of SES compares the probabilities when a report is either given or adjusted for with the probabilities when no report is offered. The combined probabilities for studies in which SES is reported or adjusted for are (.61, .05, .34), while the probabilities for no report remain as reported previously at (.57, .07, .37). A final filter compares the probabilities of the studies in which SES is adjusted for with those that either report it only or do not report it at all. Here we compare (.72, .00, .28) to (.53, .08, .37) in what we call a strong test. In each case we compared the probability produced by the whole group to those of the filtered studies and conducted a test of the differences to determine if they were significant. These differences were not significant. These findings indicate that to date, with this set of studies, there is no statistically significant difference in results when one reports or adjusts for changes in SES. It appears that by adjusting for SES, one sees increases in the positive results, and this result deserves a closer examination for its implications should it prove to hold up over larger sets of studies.
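
To show the mechanics of one such filter comparison, the sketch below tallies outcomes for two subsets of studies and applies an uncorrected chi-square test. The counts are invented for illustration (chosen only to roughly echo the proportions quoted above), and the committee's exact tabulation procedure is not specified in the text, so this is a plausible reading rather than the actual analysis.

```python
# Sketch: comparing outcome patterns for filtered subsets of studies with a
# chi-square test (no continuity correction, as in the committee's footnote).
# The counts of positive / negative / no-difference comparisons are invented.
from scipy.stats import chi2_contingency

# rows: studies that adjusted for SES vs. studies that only reported or omitted SES
# cols: comparisons favoring treatment, favoring comparison, no significant difference
table = [
    [36, 0, 14],   # hypothetical counts, roughly matching (.72, .00, .28)
    [53, 8, 37],   # hypothetical counts, roughly matching (.53, .08, .37)
]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```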

We ran tests that report the impact of the filters on the number of studies, the percentage of studies, and the effects described as probabilities for each of the three study categories, NSF-supported and commercially generated with UCSMP included. We claim that when a pattern of probabilities of results does not change after filtering, one can have more confidence in that pattern. When the pattern of results changes, there is a need for an explanatory hypothesis, and that hypothesis can shed light on experimental design. We propose that this “filtering process” constitutes a test of the robustness of the outcome measures subjected to increasing degrees of rigor by using filtering.

Results of Filtering on Evaluations of NSF-Supported Curricula

For the NSF-supported curricular programs, out of 15 filters, 5 produced a probability that differed significantly at the p<.1 level. The five filters were for treatment fidelity, specification of control group, choosing the appropriate statistical unit, generalizability for ability, and generalizability based on disaggregation by subgroup. For each filter, there were from three to nine comparisons, as we examined how the probabilities of outcomes change as tests were more stringent and across the categories of positive results, negative results, and results with no significant differences. Out of a total of 72 possible tests, only 11 produced a probability that differed significantly at the p < .1 level. With 85 percent of the comparisons showing no significant difference after filtering, we suggest the results of the studies were relatively robust in relation to these tests. At the same time, when rigor is increased for the five filters just listed, the results become generally more ambiguous and signal the need for further research with more careful designs.

Studies of Commercial Materials and the Filters

To ensure enough studies to conduct the analysis (n=17), our filtering analysis of the commercially generated studies included UCSMP (n=8). In this case, there were six filters that produced a probability that differed significantly at the p < .1 level. These were treatment fidelity, disaggregation by content, use of multiple tests, use of effect size, generalizability by ability, and generalizability by sample size. In this case, because there were no studies in some possible categories, there were a total of 57 comparisons, and 9 displayed significant differences in the probabilities after filtering at the p < .1 level. With 84 percent of the comparisons showing no significant difference after filtering, we suggest the results of the studies were relatively robust in relation to these tests. Table 5-9 shows the cases in which significant differences were recorded.

Impact of Treatment Fidelity on Probabilities

A few of these differences are worthy of comment. In the cases of both the NSF-supported and commercially generated curricula evaluation studies, studies that reported treatment fidelity differed significantly from those that did not. In the case of the studies of NSF-supported curricula, it appeared that a report or adjustment on treatment fidelity led to proportions with less positive effects and more results showing no significant differences. We hypothesize that this is partly because larger studies often do not examine actual classroom practices, but can obtain significance more easily due to large sample sizes.

In the studies of commercial materials, the presence or absence of measures of treatment fidelity worked differently. Studies reporting on or adjusting for treatment fidelity tended to have significantly higher probabilities in favor of experimental treatment, less positive effects in fewer of the comparative treatments, and more likelihood of results with no significant differences. We hypothesize, and confirm with a separate analysis, that this is because UCSMP frequently reported on treatment fidelity in their designs while studies of Saxon typically did not, and the change represents the preponderance of these different curricular treatments in the studies of commercially generated materials.

Impact of Identification of Curricular Program on Probabilities

The significant differences reported under specificity of curricular comparison also merit discussion for studies of NSF-supported curricula. When the comparison group is not specified, a higher percentage of mean scores in favor of the experimental curricula is reported. In the studies of commercial materials, a failure to name specific curricular comparisons also produced a higher percentage of positive outcomes for the treatment, but the difference was not statistically significant. This suggests the possibility that when a specified curriculum is compared to an unspecified curriculum, reports of impact may be inflated. This finding may suggest that in studies of effectiveness, specifying comparative treatments would provide more rigorous tests of experimental approaches.

When studies of commercial materials disaggregate their results by content strand or use multiple measures, their reports of positive outcomes increase, the negative outcomes decrease, and in one case, the results show no significant differences. A significant difference was recorded in only one comparison within each of these filters.

TABLE 5-9 Cases of Significant Differences

Test | Type of Comparison | Category Code | N | Probabilities Before Filter | p
Treatment fidelity | Simple compare | Specified | 21 | .51, .02, .47* | *p=.049
 | | Not specified | 24 | .68, .09, .23* |
 | | Adjusted for | 1 | .25, .00, .75 |
Treatment fidelity | Strong test | Adjusted for | 22 | .49*, .02, .49** | *p=.098
 | | Reported or not specified | 24 | .68*, .09, .23** | **p=.019
Control group specified | Simple compare | Specified | 8 | .33*, .00, .66** | *p=.033
 | | Not specified | 38 | .65*, .07, .29** | **p=.008
Appropriate unit of analysis | Simple compare | Correct | 5 | .30*, .40**, .30 | *p=.069
 | | Incorrect | 41 | .63*, .01**, .36 | **p=.000
Generalizability by ability | Simple compare | Limited | 5 | .22*, .41**, .37 | *p=.019
 | | Not limited | 41 | .64*, .01**, .35 | **p=.000
Generalizability by disaggregated subgroup | Simple compare | Limited | 28 | .48*, .09, .43** | *p=.013
 | | Not limited | 18 | .76*, .00, .24** | **p=.085
Treatment fidelity | Simple compare | Reported | 7 | .53, .37*, .20 | *p=.032
 | | Not specified | 9 | .26, .67*, .11 |
 | | Adjusted for | 1 | .45, .00*, .55 |
Treatment fidelity | Weak test | Adjusted for or reported | 8 | .52, .33, .25* | *p=.087
 | | Not specified | 9 | .26, .67, .11* |
Outcomes disaggregated by content strand | Simple compare | Reported | 11 | .50, .37, .22* | *p=.052
 | | Not reported | 6 | .17, .77, .10* |
Outcomes using multiple tests | Simple compare | Yes | 9 | .55*, .35, .19 | *p=.076
 | | No | 8 | .20*, .68, .20 |
Effect size reported | Simple compare | Yes | 3 | .72, .05, .29* | *p=.029
 | | No | 14 | .31, .61, .16* |
Generalization by ability | Simple compare | Limited | 4 | .23, .41*, .32 | *p=.004
 | | Not limited | 14 | .42, .53, .09 |
Generalization by sample size | Simple compare | Limited | 6 | .57, .23, .27* | *p=.036
 | | Not limited | 11 | .28, .66, .10* |

NOTE: In the comparisons shown, only the comparisons marked by an asterisk showed significant differences at p<.1. Probabilities are estimated for each significant difference.

Impact of Units of Analysis on Probabilities[6]

For the evaluations of the NSF-supported materials, a significant difference was reported on the outcomes for the studies that used the correct unit of analysis compared to those that did not. The probabilities for those with the correct unit were (.30, .40, .30), compared to (.63, .01, .36) for those that used the incorrect unit. These results suggest that our prediction that using the correct unit of analysis would decrease the percentage of positive outcomes is likely to be correct. It also suggests that the most serious threat to the apparent conclusions of these studies comes from selecting an incorrect unit of analysis. It causes a decrease in favorable results, making the results more ambiguous, but never reverses the direction of the effect. This is a concern that merits major attention in the conduct of further studies.

For the commercially generated studies, most of the ones coded with the correct unit of analysis were UCSMP studies. Because of the small number of studies involved, we could not break out from the overall filtering of studies of commercial materials, but report this issue to assist readers in interpreting the relative patterns of results.

[6] It should be noted that of the five studies in which the correct unit of analysis was used, two of these were population studies of freshmen entering college, and these reported few results in favor of the experimental treatments. However, the high proportion of these studies involving college students may skew this particular result relative to the preponderance of other studies involving K-12 students.

Impact of Generalizability on Probabilities

Both types of studies yielded significant differences for some of the comparisons coded as restrictions to generalizability. Investigating these is important in order to understand the effects of these curricular programs on different subpopulations of students. In the case of the studies of commercially generated materials, significantly different results occurred in the categories of ability and sample size. In the studies of NSF-supported materials, the significant differences occurred in ability and disaggregation by subgroups.

In relation to generalizability, the studies of NSF-supported curricula reported significantly more positive results in favor of the treatment when they included all students. Because studies coded as “limited by ability” were restricted either by focusing only on higher achieving students or on lower achieving students, we sorted these two groups. For higher performing students (n=3), the probabilities of effects were (.11, .67, .22). For lower performing students (n=2), the probabilities were (.39, .025, .59). The first two comparisons are significantly different at p < .05. These findings are based on only a total of five studies, but they suggest that these programs may be serving the weaker ability students more effectively than the stronger ability students, serving both less well than they serve whole heterogeneous groups. For the studies of commercial materials, there were only three studies that were restricted to limited populations. The results for those three studies were (.23, .41, .32) and for all students (n=14) were (.42, .53, .09). These studies were significantly different at p = .004. All three studies included UCSMP and one also included Saxon and was limited by serving primarily high-performing students. This means both categories of programs are showing weaker results when used with high-ability students.

Finally, 28 of the studies on NSF-supported materials did not disaggregate their results by subgroup. A complete analysis of this set follows, but the studies that did not report results disaggregated by subgroup generated probabilities of (.48, .09, .43), whereas those that did disaggregate their results reported (.76, .00, .24). These gains in positive effects came from significant losses in reporting no significant differences. Studies of commercial materials also reported a small decrease in likelihood of negative effects for the comparison program when disaggregation by subgroup is reported, offset by increases in positive results and results with no significant differences, although these comparisons were not significantly different. A further analysis of this topic follows.

Overall, these results suggest that increased rigor seems to lead in general to less strong outcomes, but never reports of completely contrary results. These results also suggest that in recommending design considerations to evaluators, there should be careful attention to having evaluators include measures of treatment fidelity, considering the impact on all students as well as one particular subgroup; using the correct unit of analysis; and using multiple tests that are also disaggregated by content strand.

Further Analyses

We conducted four further analyses: (1) an analysis of the outcome probabilities by test type; (2) content strands analysis; (3) equity analysis; and (4) an analysis of the interactions of content and equity by grade band. Careful attention to the issues of content strand, equity, and interaction is essential for the advancement of curricular evaluation. Content strand analysis provides the detail that is often lost by reporting overall scores; equity analysis can provide essential information on what subgroups are adequately served by the innovations, and analysis by content and grade level can shed light on the controversies that evolve over time.

Analysis by Test Type

Different studies used varied combinations of outcome measures. Because of the influence of outcome measures on test results, we chose to examine whether the probabilities for the studies changed significantly across different types of outcome measures (national test, local test). The most frequently used test types across all studies were a combination of national and local tests (n=18 studies), a local test (n=16), and national tests (n=17). Other combinations of tests were used by three studies or fewer. The percentages of various outcomes by test type in comparison to all studies are described in Table 5-10.

These data (Table 5-11) suggest that national tests tend to produce less positive results, with the difference shifting toward results showing no significant differences, suggesting that national tests demonstrate less curricular sensitivity and specificity.

TABLE 5-10 Percentage of Outcomes by Test Type

Test Type   | National/Local       | Local Only           | National Only       | All Studies
All studies | (.48, .18, .34) n=18 | (.63, .03, .34) n=16 | (.31, .05, .64) n=3 | (.54, .07, .40) n=63

NOTE: In each cell, the first number in the parentheses is the percentage of outcomes that are positive, the second is the percentage that are negative, and the third is the percentage that show no significant difference.

TABLE 5-11 Percentage of Outcomes by Test Type and Program Type

Test Type          | National/Local       | Local Only           | National Only       | All Studies
NSF effects        | (.52, .15, .34) n=14 | (.57, .03, .39) n=14 | (.44, .00, .56) n=4 | (.59, .06, .35) n=46
UCSMP effects      | (.41, .18, .41) n=3  | ***                  | ***                 | (.49, .09, .42) n=8
Commercial effects | **                   | **                   | (.29, .08, .63) n=8 | (.29, .13, .59) n=9

NOTE: In each cell, the first number in the parentheses is the percentage of outcomes that are positive, the second is the percentage that are negative, and the third is the percentage that show no significant difference.

TABLE 5-12 Number of Studies That Disaggregated by Content Strand

Program Type           | Elementary | Middle | High School | Total
NSF-supported          | 14         | 6      | 9           | 29
Commercially generated | 0          | 4      | 5           | 9

Content Strand

Curricular effectiveness is not an all-or-nothing proposition. A curriculum may be effective in some topics and less effective in others. For this reason, it is useful for evaluators to include an analysis of curricular strands and to report on the performance of students on those strands. To examine this issue, we conducted an analysis of the studies that reported their results by content strand. Thirty-eight studies did this; the breakdown is shown in Table 5-12 by type of curricular program and grade band.

To examine the evaluations of these content strands, we began by listing all of the content strands reported across studies as well as the frequency of report by the number of studies at each grade band. These results are shown in Figure 5-11 , which is broken down by content strand, grade level, and program type.

Although there are numerous content strands, some of them were reported on infrequently. To allow the analysis to focus on the key results from these studies, we separated out the most frequently reported strands, which we call the “major content strands.” We defined these as strands that were examined in at least 10 percent of the studies. The major content strands are marked with an asterisk in Figure 5-11. When we conduct analyses across curricular program types or grade levels, we use these to facilitate comparisons.

FIGURE 5-11 Study counts for all content strands.

A second phase of our analysis was to examine the performance of students by content strand in the treatment group in comparison to the control groups. Our analysis was conducted across the major content strands at the level of NSF-supported versus commercially generated, initially by all studies and then by grade band. It appeared that such analysis permitted some patterns to emerge that might prove helpful to future evaluators in considering the overall effectiveness of each approach. To do this, we then coded the number of times any particular strand was measured across all studies that disaggregated by content strand. Then, we coded the proportion of times that this strand was reported as favoring the experimental treatment, favoring the comparative curricula, or showing no significant difference. These data are presented across the major content strands for the NSF-supported curricula (Figure 5-12) and the commercially generated curricula (Figure 5-13), except in the case of the elementary curricula where no data were available, in the form of percentages, with the frequencies listed in the bars.

The presentation of results by strands must be accompanied by the same restrictions as stated previously. These results are based on studies identified as at least minimally methodologically adequate. The quality of the outcome measures in measuring the content strands has not been examined. Their results are coded in relation to the comparison group in the study and are indicated as statistically in favor of the program, as in favor of the comparative program, or as showing no significant differences. The results are combined across studies with no weighting by study size. Their results should be viewed as a means for the identification of topics for potential future study. It is completely possible that a refinement of methodologies may affect the future patterns of results, so the results are to be viewed as tentative and suggestive.


FIGURE 5-12 Major content strand result: All NSF (n=27).

According to these tentative results, future evaluations should examine whether the NSF-supported programs produce sufficient competency among students in the areas of algebraic manipulation and computation. In computation, approximately 40 percent of the results were in favor of the treatment group, no significant differences were reported in approximately 50 percent of the results, and results in favor of the comparison were revealed 10 percent of the time. Interpreting that final proportion of no significant difference is essential. Some would argue that because computation has not been emphasized, findings of no significant differences are acceptable. Others would suggest that such findings indicate weakness, because the development of the materials and accompanying professional development yielded no significant difference in key areas.


FIGURE 5-13 Major content strand result: All commercial (n=8).

Figure 5-13, which shows findings from studies of commercially generated curricula, indicates that mixed results are commonly reported. Thus, in evaluations of commercial materials, the lack of significant differences in computations/operations, word problems, and probability and statistics suggests that careful attention should be given to measuring these outcomes in future evaluations.

Overall, the grade band results for the NSF-supported programs—while consistent with the aggregated results—provide more detail. At the elementary level, evaluations of NSF-supported curricula (n=12) report better performance in mathematics concepts, geometry, and reasoning and problem solving, and some weaknesses in computation. No content strand analysis for commercially generated materials was possible. Evaluations (n=6) at middle grades of NSF-supported curricula showed strength in measurement, geometry, and probability and statistics and some weaknesses in computation. In the studies of commercial materials, evaluations (n=4) reported favorable results in reasoning and problem solving and some unfavorable results in algebraic procedures, contextual problems, and mathematics concepts. Finally, at the high school level, the evaluations (n=9) by content strand for the NSF-supported curricula showed strong favorable results in algebra concepts, reasoning/problem solving, word problems, probability and statistics, and measurement. Results in favor of the control were reported in 25 percent of the algebra procedures and 33 percent of computation measures.

For the studies of commercial materials (n=4), only the geometry results favor the control group 25 percent of the time, with 50 percent having favorable results. Algebra concepts, reasoning, and probability and statistics also produced favorable results.

Equity Analysis of Comparative Studies

When the goal of providing a standards-based curriculum to all students was proposed, most people could recognize its merits: the replacement of dull, repetitive, largely dead-end courses with courses that would lead all students to be able, if desired and earned, to pursue careers in mathematics-reliant fields. It was clear that the NSF-supported projects, a stated goal of which was to provide standards-based courses to all students, called for curricula that would address the problem of too few students persisting in the study of mathematics. For example, as stated in the NSF Request for Proposals (RFP):

Rather than prematurely tracking students by curricular objectives, secondary school mathematics should provide for all students a common core of mainstream mathematics differentiated instructionally by level of abstraction and formalism, depth of treatment and pace (National Science Foundation, 1991, p. 1). In the elementary level solicitation, a similar statement on courses for all students was made (National Science Foundation, 1988, pp. 4-5).

Some, but not enough, attention has been paid to the education of students who fall below the average of the class. On the other hand, because the above-average students sometimes do not receive a demanding education, it may be incorrectly assumed they are easy to teach (National Science Foundation, 1989, p. 2).

Likewise, with increasing numbers of students in urban schools, and increased demographic diversity, the challenges of equity are equally significant for commercial publishers, who feel increasing pressures to demonstrate the effectiveness of their products in various contexts.

The problem was clearly identified: poorer performance by certain subgroups of students (non-Asian minorities, LEP students, and sometimes females) and a resulting lack of representation of such groups in mathematics-reliant fields. In addition, a secondary problem was acknowledged: highly talented American students were not being provided adequate challenge and stimulation in comparison with their international counterparts. We relied on the concept of equity in examining the evaluations. Equity was contrasted with equality, in which all students are assumed to warrant exactly the same treatment (Secada et al., 1995). Equity was defined as providing opportunities and eliminating barriers so that membership in a subgroup does not subject one to an undue and systematically diminished possibility of success in pursuing mathematical study. Appropriate treatment therefore varies according to the needs of, and obstacles facing, any subgroup.

Applying the principles of equity to evaluate the progress of curricular programs is conceptually thorny: how should one judge a program's progress toward equity in meeting the needs of a diverse student body? Consider how the following questions provide a variety of perspectives on the effectiveness of curricular reform with regard to equity:

Does one expect all students to improve performance, thus raising the bar, but possibly not decreasing the gap between traditionally well-served and underserved students?

Does one focus on reducing the gap and devote less attention to overall gains, thus closing the gap but possibly not raising the bar?

Or, does one seek evidence that progress is made on both challenges—seeking progress for all students and arguably faster progress for those most at risk?

Evaluating each of the first two questions independently seems relatively straightforward. When one opts for a combination of the two, the potential for tension between them becomes more evident. For example, how can one differentiate the case in which the gap is closed because talented students are being underchallenged from the case in which the gap is closed because low-performing students improved at an increased rate? Many believe that nearly all mathematics curricula in this country are insufficiently challenging and rigorous. Therefore, achieving modest gains across all ability levels, even with evidence of accelerated progress by at-risk students, may still be criticized for failing to stimulate the top-performing group adequately. Evaluating curricula in this respect therefore requires judgment and careful methodological attention.
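To make the distinction concrete, the short Python sketch below contrasts two invented scenarios that produce identical gap reduction through very different mechanisms; the scores and the gap_change helper are hypothetical and purely illustrative, not drawn from the studies reviewed.

```python
# Hypothetical pre/post scores illustrating two ways a performance gap can "close".
# All numbers are invented for illustration only.

def gap_change(pre_low, post_low, pre_high, post_high):
    """Return the pre- and post-treatment gaps between a higher- and a lower-performing group."""
    return pre_high - pre_low, post_high - post_low

# Case 1: the gap closes because at-risk students accelerate while top students also gain.
print(gap_change(pre_low=40, post_low=55, pre_high=70, post_high=78))  # (30, 23)

# Case 2: the gap closes because top students stagnate (underchallenged),
# not because low-performing students improved at an increased rate.
print(gap_change(pre_low=40, post_low=47, pre_high=70, post_high=70))  # (30, 23)

# The gap shrinks by the same seven points in both cases, which is why a report of
# gap reduction alone cannot distinguish acceleration of at-risk students from
# under-stimulation of high performers; both group trajectories must be reported.
```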

Depending on one’s view of equity, different implications for the collection of data follow. These considerations made examination of the quality of the evaluations as they treated questions of equity challenging for the committee members. Hence we spell out our assumptions as precisely as possible:

Evaluation studies should include representative samples of student demographics, which may require particular attention to the inclusion of underrepresented minority students from lower socioeconomic groups, females, and special needs populations (LEP, learning disabled, gifted and talented students) in the samples. This may require one to solicit participation by particular schools or districts, rather than to follow the patterns of commercial implementation, which may lead to an unrepresentative sample in aggregate.

Analysis of results should always consider the impact of the program on the entire spectrum of the sample to determine whether the overall gains are distributed fairly among differing student groups, and not achieved as improvements in the mean(s) of an identifiable subpopulation(s) alone.

Analysis should examine whether any group of students is systematically less well served by curricular implementation, causing losses or weakening the rate of gains. For example, this could occur if one neglected the continued development of programs for gifted and talented students in mathematics in order to implement programs focused on improving access for underserved youth, or if one improved programs solely for one group of language learners while ignoring the needs of others, or if a study systematically failed to report high attrition affecting rates of participation or success.

Analyses should examine whether gaps in scores between significantly disadvantaged or underperforming subgroups and advantaged subgroups are decreasing, both in terms of preventing such gaps from developing in the first place and in terms of accelerating improvement for underserved youth relative to their advantaged peers in the upper grades.

In reviewing the outcomes of the studies, the committee reports first on what kinds of attention to these issues were apparent in the database, and second on what kinds of results were produced. Some of the studies used multiple methods to provide readers with information on these issues. In our report on the evaluations, we both provide descriptive information on the approaches used and summarize the results of those studies. Developing more effective methods to monitor the achievement of these objectives may need to go beyond what is reported in this study.

Among the 63 at least minimally methodologically adequate studies, 26 reported on the effects of their programs on subgroups of students.

TABLE 5-13 Most Common Subgroups Used in the Analyses and the Number of Studies That Reported on That Variable

Identified Subgroup                        Studies of NSF-Supported   Studies of Commercially Generated   Total
Gender                                     14                         5                                   19
Race and ethnicity                         14                         2                                   16
Socioeconomic status                        8                         2                                   10
Achievement levels                          5                         3                                    8
English as a second language (ESL)          2                         1                                    3
Total                                      43                        13                                   56

NOTE: Achievement levels: outcome data are reported in relation to categorizations by quartiles or by achievement level based on an independent test.

The other 37 reported on the effects of the curricular intervention on the means of whole groups and their standard deviations, but did not report on their data in terms of the impact on subpopulations. Of those 26 evaluations, 19 studies were on NSF-supported programs and 7 were on commercially generated materials. Table 5-13 reports the most common subgroups used in the analyses and the number of studies that reported on that variable. Because many studies used multiple categories for disaggregation (ethnicity, SES, and gender), the number of reports is more than double the number of studies. For this reason, we report the study results in terms of the “frequency of reports on a particular subgroup” and distinguish this from what we refer to as “study counts.” The advantage of this approach is that it permits reporting on studies that investigated multiple ways to disaggregate their data. The disadvantage is that, in a sense, studies undertaking multiple disaggregations become overrepresented in the data set as a result. A similar distinction and approach were used in our treatment of disaggregation by content strands.
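As a concrete illustration of this counting convention, the following Python sketch uses invented study records (the identifiers and field names are hypothetical) to show how report counts exceed study counts when studies disaggregate along several dimensions.

```python
# Invented study records illustrating "study counts" versus "report counts".
studies = [
    {"id": "study_A", "subgroups": ["gender", "race/ethnicity", "SES"]},
    {"id": "study_B", "subgroups": ["gender"]},
    {"id": "study_C", "subgroups": ["race/ethnicity", "SES"]},
]

study_count = len(studies)                                # 3 studies
report_count = sum(len(s["subgroups"]) for s in studies)  # 6 subgroup reports

print(study_count, report_count)
# A study that disaggregates along several dimensions contributes several reports,
# which is why 26 studies yield 56 subgroup reports in Table 5-13.
```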

It is apparent from these data that the evaluators of NSF-supported curricula documented more equity-based outcomes, as they reported 43 of the 56 comparisons. However, the same percentage of NSF-supported evaluations disaggregated their results by subgroup as did commercially generated evaluations (41 percent in both cases). This is an area where evaluations of curricula could benefit greatly from standardization of expectation and methodology. Given the importance of the topic of equity, it should be standard practice to include such analyses in evaluation studies.

In summarizing these 26 studies, the first consideration was whether representative samples of students were evaluated. As we have learned from medical studies, if conclusions on effectiveness are drawn without careful attention to the representativeness of the sample relative to the whole population, the generalizations drawn from the results can be seriously flawed. In Chapter 2 we reported that across the studies, approximately 81 percent of the comparative studies and 73 percent of the case studies reported data on school location (urban, suburban, rural, or state/region), with suburban students constituting the largest percentage in both study types. The proportions of students studied indicated a tendency to undersample urban and rural populations and to oversample suburban schools. Because urban and rural areas have high concentrations of minority and lower-SES students, this raises some concerns about the representativeness of the work.

A second consideration was to see whether the achievement effects of curricular interventions were achieved evenly among the various subgroups. Studies answered this question in different ways. Most commonly, evaluators reported on the performance of various subgroups in the treatment conditions as compared to those same subgroups in the comparative condition. They reported outcome scores or gains from pretest to posttest. We refer to these as “between” comparisons.

Other studies reported on the differences among subgroups within an experimental treatment, describing how well one group does in comparison with another group. Again, these reports were done in relation either to outcome measures or to gains from pretest to posttest. Often these reports contained a time element, reporting on how the internal achievement patterns changed over time as a curricular program was used. We refer to these as “within” comparisons.

Some studies reported both between and within comparisons. Others did not report findings by comparing mean scores or gains, but rather created regression equations that predicted the outcomes and examined whether demographic characteristics were related to performance. Six studies (all on NSF-supported curricula) used this approach with variables related to subpopulations. Twelve studies used ANCOVA or multivariate analysis of variance (MANOVA) to study disaggregation by subgroup, and two reported comparative effect sizes. Of the studies using statistical tests other than t-tests or chi-squares, two were evaluations of commercially generated materials and the rest were of NSF-supported materials.
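The following sketch, using the statsmodels formula interface, illustrates the general shape such a regression-based subgroup analysis might take: posttest scores are modeled from treatment status, prior achievement, and demographic indicators, with an interaction term to ask whether the treatment effect differs by subgroup. The file and column names (posttest, pretest, treatment, gender, ethnicity, ses) are assumptions for illustration, not a reconstruction of any reviewed study's model.

```python
# Hedged sketch of a regression-based subgroup analysis (column names are assumed).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_outcomes.csv")  # hypothetical file: one row per student

model = smf.ols(
    "posttest ~ treatment + pretest + C(gender) + C(ethnicity) + ses"
    " + treatment:C(ethnicity)",  # interaction: does the treatment effect vary by subgroup?
    data=df,
).fit()

print(model.summary())  # inspect whether demographic terms and interactions relate to performance
```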

Of the studies that reported on gender (n=19), the NSF-supported ones (n=13) reported five cases in which the females outperformed their counterparts in the controls and one case in which the female-male gap decreased within the experimental treatments across grades. In most cases, the studies present a mixed picture with some bright spots, with the majority showing no significant difference. One study reported significant improvements for African-American females.

In relation to race, 15 of 16 reports on African American students showed positive effects in favor of the treatment group for NSF-supported curricula. Two studies reported decreases in the gaps between African American students and white or Asian students. One of the two evaluations of commercially generated materials that reported on African American students' performance showed significant positive results, as mentioned previously.

For Hispanic students, 12 of 15 reports of the NSF-supported materials were significantly positive, with the other 3 showing no significant difference. One study reported a decrease in the gaps in favor of the experimental group. No evaluations of commercially generated materials were reported on Hispanic populations. Other reports on ethnic groups occurred too seldom to generalize.

Students from lower socioeconomic groups fared well, according to reported evaluations of NSF-supported materials (n=8), in that experimental groups outperformed control groups in all but one case. The one study of commercially generated materials that included SES as a variable reported no significant difference. For students with limited English proficiency, of the two evaluations of NSF-supported materials, one reported significantly more positive results for the experimental treatment. Likewise, one study of commercially generated materials yielded a positive result at the elementary level.

We also examined the data for ability differences and found reports by quartiles for a few evaluation studies. In these cases, the evaluations showed results across quartiles in favor of the NSF-supported materials. In two studies of the same program, the lower quartiles showed the most improvement in one, while in the other the gains were concentrated in the middle and upper groups on the Iowa Test of Basic Skills and were evenly distributed on the informal assessment.

Summary Statements

After reviewing these studies, the committee observed that differences by gender, race, SES, and performance level should be examined as a regular part of any review of effectiveness. We recommend that all comparative studies report on both “between” and “within” comparisons so that the audience of an evaluation can readily consider the level of improvement, its distribution across subgroups, and the impact of curricular implementation on any gaps in performance. Each of the major categories—gender, race/ethnicity, SES, and achievement level—contributes a significant and contrasting view of curricular impact. Furthermore, more sophisticated accounts would begin to permit finer distinctions to emerge across studies, such as the effect of a program on young African-American women or on first-generation Asian students.

In addition, the committee encourages further study and deliberation on the use of more complex approaches to the examination of equity issues. This is particularly important because these categories overlap: poverty, for example, can act as a variable in its own right but may also be highly correlated with prior performance. Hence, the use of one variable can mask differences that should be more directly attributed to another. The committee recommends that a group of measurement and equity specialists confer on the most effective designs for advancing these questions.

Finally, it is imperative that evaluation studies systematically include demographically representative student populations and distinguish evaluations that follow the commercial patterns of use from those that seek to establish effectiveness with a diverse student population. Along these lines, it is also important that studies report impact data for all substantial ethnic groups, including whites. Many studies, perhaps because whites were the majority population, failed to report on this group in their analyses. As we saw in one study in which Asian students were from poor homes and first generation, any subgroup can be an at-risk population in some settings, and gains in means cannot be assumed to translate into gains for every subgroup, or even for the majority subgroup. More complete and thorough descriptions of the characteristics of the subgroups being served at any location—with careful attention to interactions—are needed in evaluations.

Interactions Among Content and Equity, by Grade Band

By examining disaggregation by content strand at each grade level, along with disaggregation by diverse subpopulations, the committee began to discover grade band patterns of performance that should be useful in the conduct of future evaluations. Examining each of these issues in isolation can mask some of the overall effects of curricular use. Two examples of such analysis are provided. The first examines all the evaluations of NSF-supported curricula at the elementary level. The second examines the set of evaluations of NSF-supported curricula at the high school level; a parallel analysis cannot be carried out on evaluations of commercially generated programs because they lack disaggregation by student subgroup.

Example One

At the elementary level, the review of evaluation data on the effectiveness of NSF-supported curricula reports consistent patterns of benefits to students. Across the studies, it appears that positive results are enhanced when accompanied by adequate professional development and the use of pedagogical methods consistent with those indicated by the curricula. The benefits are most consistently evidenced in the broader topics of geometry, measurement, probability, and statistics, and in applied problem solving and reasoning. It is important to consider whether the outcome measures in these areas demonstrate a depth of understanding. In early understanding of fractions and algebra, there is some evidence of improvement. Weaknesses are sometimes reported in the area of computational skills, especially in the routinization of multiplication and division. These assertions are tentative due to possible flaws in the designs but are quite consistent across studies, and future evaluations should seek to replicate, modify, or discredit these results.

The way to most efficiently and effectively link informal reasoning and formal algorithms and procedures is an open question. Further research is needed to determine how to most effectively link the gains and flexibility associated with student-generated reasoning to the automaticity and generalizability often associated with mastery of standard algorithms.

The data from these evaluations at the elementary level generally present credible evidence of increased success in engaging minority students and students in poverty, based on reported gains that are modestly higher for these students than for the comparative groups. What is less well documented in the studies is the extent to which the curricula counteract the tendency for performance gaps by gender and minority group membership to emerge and persist as students move up the grades. However, the evaluations do indicate that these curricula can help, and almost never do harm. Finally, on the question of adequate challenge for advanced and talented students, the data are equivocal. More attention to this issue is needed.

Example Two

The data at the high school level produced the most conflicting results, and in conducting future evaluations, evaluators will need to examine this level more closely. We identify the high school as the crucible for curricular change for three reasons: (1) the transition to postsecondary education puts considerable pressure on these curricula; (2) the criteria outlined in the NSF RFP specify significant changes from traditional practice; and (3) high school freshmen arrive from a myriad of middle school curricular experiences. For the NSF-supported curricula, the RFP required that the programs provide a core curriculum “drawn from statistics/probability, algebra/functions, geometry/trigonometry, and discrete mathematics” (NSF, 1991, p. 2) and use “a full range of tools, including graphing calculators and computers” (NSF, 1991, p. 2). The NSF RFP also specified the inclusion of “situations from the natural and social sciences and from other parts of the school curriculum as contexts for developing and using mathematics” (NSF, 1991, p. 1). It was during the fourth year that “course options should focus on special mathematical needs of individual students, accommodating not only the curricular demands of the college-bound but also specialized applications supportive of the workplace aspirations of employment-bound students” (NSF, 1991, p. 2). Because this set of requirements comprises a significant departure from conventional practice, the implementation of the high school curricula should be studied in particular detail.

We report on a Systemic Initiative for Montana Mathematics and Science (SIMMS) study by Souhrada (2001) and Brown et al. (1990), in which students were permitted to select traditional, reform, and mixed tracks. It became apparent that the students were quite aware of the choices they faced, as illustrated in the following quote:

The advantage of the traditional courses is that you learn—just math. It’s not applied. You get a lot of math. You may not know where to use it, but you learn a lot…. An advantage in SIMMS is that the kids in SIMMS tell me that they really understand the math. They understand where it comes from and where it is used.

This quote succinctly captures the tensions reported as experienced by students. It suggests that student perceptions are an important source of evidence in conducting evaluations. As we examined these curricular evaluations across the grades, we paid particular attention to the specificity of the outcome measures in relation to curricular objectives. Overall, a review of these studies would lead one to draw the following tentative summary conclusions:

There is some evidence of discontinuity in the articulation between high school and college, resulting from the organization and emphasis of the new curricula. This discontinuity can emerge in scores on college admission tests, placement tests, and first-semester grades, where nonreform students have shown some advantage on typical college achievement measures.

The most significant areas of disadvantage seem to be in students’ facility with algebraic manipulation, and with formalization, mathematical structure, and proof when isolated from context and denied technological supports. There is some evidence of weakness in computation and numeration, perhaps due to reliance on calculators and varied policies regarding their use at colleges (Kahan, 1999; Huntley et al., 2000).

There is also consistent evidence that the new curricula present strengths in areas of solving applied problems, the use of technology, and new areas of content development such as probability and statistics and functions-based reasoning in the use of graphs, using data in tables, and producing equations to describe situations (Huntley et al., 2000; Hirsch and Schoen, 2002).

Despite early performance on standard outcome measures at the high school level showing equivalent or better performance by reform students (Austin et al., 1997; Merlino and Wolff, 2001), the common standardized outcome measures (Preliminary Scholastic Assessment Test [PSAT] scores or national tests) are too imprecise to determine with more specificity the comparisons between the NSF-supported and comparison approaches, while program-generated measures lack evidence of external validity and objectivity. There is an urgent need for a set of measures that would provide detailed information on specific concepts and conceptual development over time and may require use as embedded as well as summative assessment tools to provide precise enough data on curricular effectiveness.

The data also report some progress in strengthening the performance of underrepresented groups in mathematics relative to their counterparts in the comparative programs (Schoen et al., 1998; Hirsch and Schoen, 2002).

This reported pattern of results should be viewed as very tentative: there are only a few studies in each of these areas, and most do not adequately control for competing factors, such as the nature of the course received in college. Difficulties in the transition may also result from a lack of alignment of measures, especially as placement exams often emphasize algebraic proficiencies. These results are presented only for the purpose of stimulating further evaluation efforts. They further emphasize the need to be certain that such designs examine the level of mathematical reasoning of students, particularly in relation to their knowledge and understanding of the role of proofs and definitions and their facility with algebraic manipulation, as well as carefully document the competencies taught in the curricular materials. In our framework, gauging the ease of transition to college study is an issue of examining curricular alignment with systemic factors, and it needs to be considered along with tests that demonstrate the curricular validity of measures. Furthermore, the results raising concerns about college success need replication before secure conclusions are drawn.

Also, it is important that subsequent evaluations examine curricular effects on students’ interest in mathematics and willingness to persist in its study. Walker (1999) reported that there may be some systematic differences in these behaviors among different curricula and that interest and persistence may help students across a variety of subgroups to survive entry-level hurdles, especially if technical facility with symbol manipulation can be improved. In the context of declines in advanced study in mathematics by American students (Hawkins, 2003), evaluations of curricular impact on students’ interest, beliefs, persistence, and success are needed.

The committee takes the position that ultimately the question of the impact of different curricula on performance at the collegiate level should be resolved by whether students are adequately prepared to pursue careers in mathematical sciences, broadly defined, and to reason quantitatively about societal and technological issues. It would be a mistake to focus evaluation efforts solely or primarily on performance on entry-level courses, which can clearly function as filters and may overly emphasize procedural competence, but do not necessarily represent what concepts and skills lead to excellence and success in the field.

These tentative patterns of findings indicate that at the high school level, it is necessary to conduct individual evaluations that examine the transition to college carefully in order to gauge the level of success in preparing students for college entry and the successful negotiation of majors. Equally, it is imperative to examine the impact of high school curricula on other possible student trajectories, such as obtaining high school diplomas, moving into worlds of work or through transitional programs leading to technical training, two-year colleges, and so on.

These two analyses of programs by grade-level band, content strand, and equity represent a methodological innovation that could strengthen the empirical database on curricula significantly and provide the level of detail curriculum designers really need to improve their programs. In addition, it appears that one could characterize the NSF programs (though not the commercial programs as a group) as representing a particular approach to curriculum, as discussed in Chapter 3. It is an approach that integrates content strands; relies heavily on the use of situations, applications, and modeling; encourages the use of technology; and has a significant dose of mathematical inquiry. One could ask whether this approach as a whole is “effective.” Answering that question is beyond the charge and scope of this report, but it is a worthy target of investigation if proper care is used in design, execution, and analysis. Likewise, other approaches to curricular change should be investigated at the aggregate level, using careful and rigorous design.

The committee believes that a diversity of curricular approaches is a strength in an educational system that maintains local and state control of curricular decision making. While “scientifically established as effective” should be an increasingly important consideration in curricular choice, local cultural differences, needs, values, and goals will also properly influence curricular choice. A diverse set of effective curricula would be ideal. Finally, the committee emphasizes once again the importance of basing the studies on measures with established curricular validity and avoiding corruption of indicators as a result of inappropriate amounts of teaching to the test, so as to be certain that the outcomes are the product of genuine student learning.

CONCLUSIONS FROM THE COMPARATIVE STUDIES

In summary, the committee reviewed a total of 95 comparative studies. There were more evaluations of NSF-supported programs than of commercial ones, and the commercial ones were primarily on Saxon or UCSMP materials. Of the 19 curricular programs reviewed, 23 percent of the NSF-supported and 33 percent of the commercially generated programs had no comparative reviews. This finding is particularly disturbing in light of the legislative mandate in No Child Left Behind (U.S. Department of Education, 2001) for scientifically based curricular programs and materials to be used in the schools. It suggests that more explicit protocols for the conduct of evaluations of programs, including comparative studies, need to be required and utilized.

Sixty-nine percent of NSF-supported and 61 percent of commercially generated program evaluations met basic conditions to be classified as at least minimally methodologically adequate studies for the evaluation of effectiveness. These studies were ones that met the criteria of including measures of student outcomes on mathematical achievement, reporting a method of establishing comparability among samples and reporting on implementation elements, disaggregating by content strand, or using precise, theoretical analyses of the construct or multiple measures.

Most of these studies had both strengths and weaknesses in their quasi-experimental designs. The committee reviewed the studies and found that evaluators had developed a number of features that merit inclusion in future work. At the same time, many had internal threats to validity that suggest a need for clearer guidelines for the conduct of comparative evaluations.

Many of the strengths and innovations came from the evaluators’ understanding of the program theories behind the curricula, their knowledge of the complexity of practice, and their commitment to measuring valid and significant mathematical ideas. Many of the weaknesses came from inadequate attention to experimental design, insufficient evidence of the independence of evaluators in some studies, and instability and lack of cooperation in interfacing with the conditions of everyday practice.

The committee identified 10 elements of comparative studies needed to establish a basis for determining the effectiveness of a curriculum. We recognize that not all studies will be able to implement all elements successfully, and that experimental design variations will be based largely on study size and location. The list of elements begins with the seven elements corresponding to the seven critical decisions and adds three additional elements that emerged as a result of our review:

A better balance needs to be achieved between experimental and quasi-experimental studies. The virtual absence of large-scale experimental studies does not provide a way to determine whether the use of quasi-experimental approaches is being systematically biased in unseen ways.

If a quasi-experimental design is selected, it is necessary to establish comparability. When quasi-experimentation is used, it “pertains to studies in which the model to describe effects of secondary variables is not known but assumed” (NRC, 1992, p. 18). This leads to weaker and potentially suspect causal claims, which should be acknowledged in the evaluation report, but may be necessary in relation to feasibility (Joint Committee on Standards for Educational Evaluation, 1994). In general, to date, studies have assumed that prior achievement measures, ethnicity, gender, and SES are acceptable variables on which to match samples or on which to make statistical adjustments. But other variables often need such control in these evaluations as well, including opportunity to learn, teacher effectiveness, and implementation (see #4 below).
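As one hedged illustration of establishing comparability, the sketch below checks whether treatment and comparison samples differ at baseline on prior achievement and demographic composition before any outcome claims are made. The file name and column names (group, pretest, gender, ethnicity, ses_category) are assumptions for illustration only.

```python
# Minimal baseline-comparability check for a quasi-experimental sample (assumed columns).
import pandas as pd
from scipy import stats

df = pd.read_csv("matched_sample.csv")  # hypothetical file: one row per student
treat = df[df["group"] == "treatment"]
comp = df[df["group"] == "comparison"]

# Continuous baseline covariate: prior achievement.
t, p = stats.ttest_ind(treat["pretest"], comp["pretest"], equal_var=False)
print(f"pretest difference: t={t:.2f}, p={p:.3f}")

# Categorical covariates: demographic composition of the two samples.
for col in ["gender", "ethnicity", "ses_category"]:
    table = pd.crosstab(df["group"], df[col])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"{col}: chi2={chi2:.2f}, p={p:.3f}")
```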

The selection of a unit of analysis is of critical importance to the design. To the extent possible, it is useful to randomly assign the unit for the different curricula. The number of units of analysis necessary for the study to establish statistical significance depends not on the number of students, but on this unit of analysis. It appears that classrooms and schools are the most likely units of analysis. In addition, increasingly sophisticated means of conducting studies are needed that recognize that the level of the educational system in which experimentation occurs affects research designs.
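A minimal sketch of honoring the unit of analysis follows: student scores are first aggregated to classroom means, and the significance test is then run on those means, so the effective sample size is the number of classrooms rather than the number of students. Column names are assumptions for illustration.

```python
# Classroom-level analysis: aggregate students to their classroom means first (assumed columns).
import pandas as pd
from scipy import stats

df = pd.read_csv("student_outcomes.csv")  # hypothetical file: one row per student

classroom_means = df.groupby(["classroom_id", "group"], as_index=False)["posttest"].mean()

treat = classroom_means.loc[classroom_means["group"] == "treatment", "posttest"]
comp = classroom_means.loc[classroom_means["group"] == "comparison", "posttest"]

t, p = stats.ttest_ind(treat, comp, equal_var=False)
print(f"classroom-level test: t={t:.2f}, p={p:.3f}, "
      f"n = {len(treat)} treatment + {len(comp)} comparison classrooms")
```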

It is essential to examine the implementation components through a set of variables that include the extent to which the materials are implemented, teaching methods, the use of supplemental materials, professional development resources, teacher background variables, and teacher effects. Gathering these data is essential for evaluators to gauge the level of implementation fidelity. Studies could also include nested designs to support analysis of variation by implementation components.

Outcome data should include a variety of measures of the highest quality. These measures should vary by question type (open ended, multiple choice), by type of test (international, national, local), and by relation of testing to everyday practice (formative, summative, high stakes), and they should ensure curricular validity of measures and assess curricular alignment with systemic factors. The use of comparisons among total tests, fair tests, and conservative tests, as done in the evaluations of UCSMP, permits one to gain insight into teacher effects and to contrast test results by the items included. Tests should also include content strands to aid disaggregation, at the level of major content strands (see Figure 5-11) and of content-specific items relevant to the experimental curricula.

Statistical analysis should be conducted on the appropriate unit of analysis and should include more sophisticated methods such as ANOVA, ANCOVA, MANOVA, linear regression, and multiple regression analysis, as appropriate.
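For instance, an analysis of covariance run on the classroom-level means from the earlier aggregation sketch could take the following form; this is an illustrative sketch under assumed column names, not a prescribed procedure.

```python
# ANCOVA on classroom means: treatment effect on posttest, adjusted for mean pretest (assumed columns).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("student_outcomes.csv")  # hypothetical file: one row per student
classrooms = df.groupby(["classroom_id", "group"], as_index=False)[["pretest", "posttest"]].mean()

ancova = smf.ols("posttest ~ C(group) + pretest", data=classrooms).fit()
print(sm.stats.anova_lm(ancova, typ=2))  # ANCOVA table: group effect adjusted for pretest
```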

Reports should include clear statements of the limitations to generalization of the study. These should include indications of limitations in populations sampled, sample size, unique population inclusions or exclusions, and levels of use or attrition. Data should also be disaggregated by gender, race/ethnicity, SES, and performance levels to permit readers to see comparative gains across subgroups both between and within studies.
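The sketch below shows one way such disaggregated reporting could be produced: mean gains are computed for each subgroup within each condition, and the “between” difference (treatment minus comparison) is reported for every subgroup. The column names and group labels are assumptions for illustration.

```python
# Disaggregate mean gains by subgroup, within and between conditions (assumed columns).
import pandas as pd

df = pd.read_csv("student_outcomes.csv")  # hypothetical file: one row per student
df["gain"] = df["posttest"] - df["pretest"]

for subgroup in ["gender", "ethnicity", "ses_category", "achievement_quartile"]:
    table = df.pivot_table(values="gain", index=subgroup, columns="group", aggfunc="mean")
    table["between_diff"] = table["treatment"] - table["comparison"]  # between-condition comparison
    print(f"\nMean gains disaggregated by {subgroup}:\n{table.round(2)}")
```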

It is useful to report effect sizes. It is also useful to present item-level data across treatment programs and to show when the performances of the two groups are within the 10 percent confidence interval of each other. These two extremes document how crucial it is for curriculum developers to garner both precise and generalizable information to inform their revisions.
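As a brief illustration, a standardized effect size such as Cohen's d can be reported alongside significance tests; the helper below uses the pooled standard deviation, and the scores shown are invented.

```python
# Cohen's d with a pooled standard deviation (illustrative, invented scores).
import numpy as np

def cohens_d(treatment_scores, comparison_scores):
    """Standardized mean difference using the pooled standard deviation."""
    t = np.asarray(treatment_scores, dtype=float)
    c = np.asarray(comparison_scores, dtype=float)
    pooled_var = ((len(t) - 1) * t.var(ddof=1) + (len(c) - 1) * c.var(ddof=1)) / (len(t) + len(c) - 2)
    return (t.mean() - c.mean()) / np.sqrt(pooled_var)

print(round(cohens_d([72, 68, 75, 81, 64], [66, 61, 70, 73, 60]), 2))
```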

Careful attention should also be given to the selection of samples of populations for participation. These samples should be representative of the populations to whom one wants to generalize the results. Studies should be clear if they are generalizing to groups who have already selected the materials (prior users) or to populations who might be interested in using the materials (demographically representative).

The control group should use an identified comparative curriculum or curricula to avoid comparisons to unstructured instruction.

In addition to these prototypical decisions to be made in the conduct of comparative studies, the committee suggests that it would be ideal for future studies to consider some of the overall effects of these curricula and to test more directly and rigorously some of the findings and alternative hypotheses. Toward this end, the committee reported the tentative findings of these studies by program type. Although these results are subject to revision, based on the potential weaknesses in design of many of the studies summarized, the form of analysis demonstrated in this chapter provides clear guidance about the kinds of knowledge claims and the level of detail that we need to be able to judge effectiveness. Until we are able to achieve an array of comparative studies that provide valid and reliable information on these issues, we will be vulnerable to decision making based excessively on opinion, limited experience, and preconceptions.

This book reviews the evaluation research literature that has accumulated around 19 K-12 mathematics curricula and breaks new ground in framing an ambitious and rigorous approach to curriculum evaluation that has relevance beyond mathematics. The committee that produced this book consisted of mathematicians, mathematics educators, and methodologists who began with the following charge:

  • Evaluate the quality of the evaluations of the thirteen National Science Foundation (NSF)-supported and six commercially generated mathematics curriculum materials;
  • Determine whether the available data are sufficient for evaluating the efficacy of these materials, and if not;
  • Develop recommendations about the design of a project that could result in the generation of more reliable and valid data for evaluating such materials.

The committee collected, reviewed, and classified almost 700 studies, solicited expert testimony during two workshops, developed an evaluation framework, established dimensions/criteria for three methodologies (content analyses, comparative studies, and case studies), drew conclusions on the corpus of studies, and made recommendations for future research.


Gen Ed Writes: Writing Across the Disciplines at Harvard College

Comparative Analysis

What It Is and Why It's Useful

Comparative analysis asks writers to make an argument about the relationship between two or more texts. Beyond that, there's a lot of variation, but three overarching kinds of comparative analysis stand out:

  • Coordinate (A ↔ B): In this kind of analysis, two (or more) texts are being read against each other in terms of a shared element, e.g., a memoir and a novel, both by Jesmyn Ward; two sets of data for the same experiment; a few op-ed responses to the same event; two YA books written in Chicago in the 2000s; a film adaptation of a play; etc.
  • Subordinate (A → B) or (B → A): Using a theoretical text (as a "lens") to explain a case study or work of art (e.g., how Anthony Jack's The Privileged Poor can help explain divergent experiences among students at elite four-year private colleges who are coming from similar socio-economic backgrounds), or using a work of art or case study as a "test" of a theory's usefulness or limitations (e.g., using coverage of recent incidents of gun violence or legislation in the U.S. to confirm or question the currency of Carol Anderson's The Second).
  • Hybrid [A  → (B ↔ C)] or [(B ↔ C) → A] , i.e., using coordinate and subordinate analysis together. For example, using Jack to compare or contrast the experiences of students at elite four-year institutions with students at state universities and/or community colleges; or looking at gun culture in other countries and/or other timeframes to contextualize or generalize Anderson's main points about the role of the Second Amendment in U.S. history.

"In the wild," these three kinds of comparative analysis represent increasingly complex—and scholarly—modes of comparison. Students can of course compare two poems in terms of imagery or two data sets in terms of methods, but in each case the analysis will eventually be richer if the students have had a chance to encounter other people's ideas about how imagery or methods work. At that point, we're getting into a hybrid kind of reading (or even into research essays), especially if we start introducing different approaches to imagery or methods that are themselves being compared along with a couple (or few) poems or data sets.

Why It's Useful

In the context of a particular course, each kind of comparative analysis has its place and can be a useful step up from single-source analysis. Intellectually, comparative analysis helps overcome the "n of 1" problem that can face single-source analysis. That is, a writer drawing broad conclusions about the influence of the Iranian New Wave based on one film is relying entirely—and almost certainly too much—on that film to support those findings. In the context of even just one more film, though, the analysis is suddenly more likely to arrive at one of the best features of any comparative approach: both films will be more richly experienced than they would have been in isolation, and the themes or questions in terms of which they're being explored (here the general question of the influence of the Iranian New Wave) will arrive at conclusions that are less at risk of oversimplification.

For scholars working in comparative fields or through comparative approaches, these features of comparative analysis animate their work. To borrow from a stock example in Western epistemology, our concept of "green" isn't based on a single encounter with something we intuit or are told is "green." Not at all. Our concept of "green" is derived from a complex set of experiences of what others say is green, or what's labeled green, or what seems to be something that's neither blue nor yellow but kind of both, etc. Comparative analysis essays offer us the chance to engage with that process—even if only enough to help us see where a more in-depth exploration with a higher and/or more diverse "n" might lead—and in that sense, from the standpoint of the subject matter students are exploring through writing as well as the complexity of the genre of writing they're using to explore it, comparative analysis forms a bridge of sorts between single-source analysis and research essays.

Typical learning objectives for single-source essays: formulate analytical questions and an arguable thesis, establish the stakes of an argument, summarize sources accurately, choose evidence effectively, analyze evidence effectively, define key terms, organize an argument logically, acknowledge and respond to counterargument, cite sources properly, and present ideas in clear prose.

Common types of comparative analysis essays and related types: two works in the same genre, two works from the same period (but in different places or in different cultures), a work adapted into a different genre or medium, two theories treating the same topic; a theory and a case study or other object, etc.

How to Teach It: Framing + Practice

Framing multi-source writing assignments (comparative analysis, research essays, multi-modal projects) is likely to overlap a great deal with "Why It's Useful" (see above), because the range of reasons why we might use these kinds of writing in academic or non-academic settings is itself the reason why they so often appear later in courses. In many courses, they're the best vehicles for exploring the complex questions that arise once we've been introduced to the course's main themes, core content, leading protagonists, and central debates.

For comparative analysis in particular, it's helpful to frame the assignment's process and how it will help students successfully navigate the challenges and pitfalls presented by the genre. Ideally, this will mean students have time to identify what each text seems to be doing, take note of apparent points of connection between different texts, and start to imagine how those points of connection (or the absence thereof)

  • complicates or upends their own expectations or assumptions about the texts
  • complicates or refutes the expectations or assumptions about the texts presented by a scholar
  • confirms and/or nuances expectations and assumptions they themselves hold or scholars have presented
  • presents entirely unforeseen ways of understanding the texts

—and all with implications for the texts themselves or for the axes along which the comparative analysis took place. If students know that this is where their ideas will be heading, they'll be ready to develop those ideas and engage with the challenges that comparative analysis presents in terms of structure (See "Tips" and "Common Pitfalls" below for more on these elements of framing).

Like single-source analyses, comparative essays have several moving parts, and giving students practice here means adapting the sample sequence laid out at the "Formative Writing Assignments" page. Three areas that have already been mentioned above are worth noting:

  • Gathering evidence: Depending on what your assignment is asking students to compare (or in terms of what), students will benefit greatly from structured opportunities to create inventories or data sets of the motifs, examples, trajectories, etc., shared (or not shared) by the texts they'll be comparing. See the sample exercises below for a basic example of what this might look like.
  • Why it Matters: Moving beyond "x is like y but also different" or even "x is more like y than we might think at first" is what moves an essay from being "compare/contrast" to being a comparative analysis. It's also a move that can be hard to make and that will often evolve over the course of an assignment. A great way to get feedback from students about where they're at on this front? Ask them to start considering early on why their argument "matters" to different kinds of imagined audiences (while they're just gathering evidence), again as they develop their thesis, and again as they're drafting their essays. (Cover letters, for example, are a great place to ask writers to imagine how a reader might be affected by reading their argument.)
  • Structure: Having two texts on stage at the same time can suddenly feel a lot more complicated for any writer who's used to having just one at a time. Giving students a sense of what the most common patterns (AAA / BBB, ABABAB, etc.) are likely to be can help them imagine, even if provisionally, how their argument might unfold over a series of pages. See "Tips" and "Common Pitfalls" below for more information on this front.

Sample Exercises and Links to Other Resources

  • Common Pitfalls
  • Advice on Timing
  • Try to keep students from thinking of a proposed thesis as a commitment. Instead, help them see it as more of a hypothesis that has emerged out of readings and discussion and analytical questions and that they'll now test through an experiment, namely, writing their essay. When students see writing as part of the process of inquiry—rather than just the result—and when that process is committed to acknowledging and adapting itself to evidence, it makes writing assignments more scientific, more ethical, and more authentic. 
  • Have students create an inventory of touch points between the two texts early in the process.
  • Ask students to make the case—early on and at points throughout the process—for the significance of the claim they're making about the relationship between the texts they're comparing.
  • For coordinate kinds of comparative analysis, a common pitfall is tied to thesis and evidence. Basically, it's a thesis that tells the reader that there are "similarities and differences" between two texts, without telling the reader why it matters that these two texts have or don't have these particular features in common. This kind of thesis is stuck at the level of description or positivism, and it's not uncommon when a writer is grappling with the complexity that can in fact accompany the "taking inventory" stage of comparative analysis. The solution is to make the "taking inventory" stage part of the process of the assignment. When this stage comes before students have formulated a thesis, that formulation is then able to emerge out of a comparative data set, rather than the data set emerging in terms of their thesis (which can lead to confirmation bias, or frequency illusion, or—just for the sake of streamlining the process of gathering evidence—cherry picking).
  • For subordinate kinds of comparative analysis, a common pitfall is tied to how much weight is given to each source. Having students apply a theory (in a "lens" essay) or weigh the pros and cons of a theory against case studies (in a "test a theory" essay) can be a great way to help them explore the assumptions, implications, and real-world usefulness of theoretical approaches. The pitfall of these approaches is that they can quickly lead to the same biases we saw above. Making sure that students know they should engage with counterevidence and counterargument, and that "lens" / "test a theory" approaches often balance each other out in any real-world application of theory, is a good way to get out in front of this pitfall.
  • For any kind of comparative analysis, a common pitfall is structure. Every comparative analysis asks writers to move back and forth between texts, and that can pose a number of challenges, including: what pattern the back and forth should follow and how to use transitions and other signposting to make sure readers can follow the overarching argument as the back and forth is taking place. Here's some advice from an experienced writing instructor to students about how to think about these considerations:

a quick note on STRUCTURE

Most of us have encountered the question of whether to adopt what we might term the “A→A→A→B→B→B” structure or the “A→B→A→B→A→B” structure. Do we make all of our points about text A before moving on to text B? Or do we go back and forth between A and B as the essay proceeds? As always, the answers to our questions about structure depend on our goals in the essay as a whole. In a “similarities in spite of differences” essay, for instance, readers will need to encounter the differences between A and B before we offer them the similarities (A-differences → B-differences → A-similarities → B-similarities). If, rather than subordinating differences to similarities, you are subordinating text A to text B (using A as a point of comparison that reveals B’s originality, say), you may be well served by the “A→A→A→B→B→B” structure.

Ultimately, you need to ask yourself how many “A→B” moves you have in you. Is each one identical? If so, you may wish to make the transition from A to B only once (“A→A→A→B→B→B”), because if each “A→B” move is identical, the “A→B→A→B→A→B” structure will appear to involve nothing more than directionless oscillation and repetition. If each is increasingly complex, however—if each AB pair yields a new and progressively more complex idea about your subject—you may be well served by the “A→B→A→B→A→B” structure, because in this case it will be visible to readers as a progressively developing argument.

As we discussed in "Advice on Timing" at the page on single-source analysis, that timeline itself roughly follows the "Sample Sequence of Formative Assignments for a 'Typical' Essay" outlined under "Formative Writing Assignments," and it spans about 5–6 steps or 2–4 weeks.

Comparative analysis assignments have a lot of the same DNA as single-source essays, but they potentially bring more reading into play and ask students to engage in more complicated acts of analysis and synthesis during the drafting stages. With that in mind, closer to 4 weeks is probably a good baseline for many comparative analysis assignments. For sections that meet once per week, the timeline will probably either need to expand—ideally—a little past the 4-week mark, or some of the steps will need to be combined or done asynchronously.

What It Can Build Up To

Comparative analyses can build up to other kinds of writing in a number of ways. For example:

  • They can build toward other kinds of comparative analysis, e.g., students can be asked to choose an additional source to complicate their conclusions from a previous analysis, or they can be asked to revisit an analysis using a different axis of comparison, such as race instead of class. (These approaches are akin to moving from a coordinate or subordinate analysis to more of a hybrid approach.)
  • They can scaffold up to research essays, which in many instances are an extension of a "hybrid comparative analysis."
  • Like single-source analysis, in a course where students will take a "deep dive" into a source or topic for their capstone, they can allow students to "try on" a theoretical approach or genre or time period to see if it's indeed something they want to research more fully.

Georgetown University, College of Arts & Sciences

Sample Thesis Topics

Majors wishing to consult recent theses may contact the Program Director for copies.

  • Harrison Rose, “An English translation and critical introduction to Gabriele D’Annunzio’s 1901 verse drama  Francesca da Rimini”  (2020)
  • Isabelle Groenewegen, “The Symbiotic Relationship between Realist Literature and Photojournalism and their Role in Bringing Dignity to the Ordinary Lives of the Modern Era” (2020).
  • Manuela Tobias, “Dictatorial Testimonies: Structure, Sign and Politics in Ricardo Piglia’s  Respiración artificial  and Junot Diaz’s  The Brief Wondrous Life of Oscar Wao” (2017).
  • Eunyoung Kim, “Don’t Choose Life: How  Trainspotting, Arcadia , and  Madre e hijo  Share Postmodern Elements Despite Their Specific Local Contexts” (2017).
  • Michelle Klein, “Trauma in Haiti: Violence, Silence, and Spirituality in the Works of Yanick Lahens, Edwidge Danticat, and Marie Vieux-Chauvet” (2016).
  • Echo Weng, “Return to the Realm of Ambiguity: Uncanny and Supernatural Beliefs in  The Temple of the Golden Pavilion” (2016).
  • Rachel Kawasaki, “Tragedy and Comedy as Opposites and Complements in Shakespeare and Molière” (2015).
  • Marie-Camille Negrin, “Imagism and Surrealism: The Literary Avant Garde Challenges Tradition with Technique” (2012).
  • Lisa Oberst, “Oscar Wilde’s use of Ekphrasis in The Picture of Dorian Gray ; A Struggle Against Literal Boundaries to Achieve Artistic Supremacy over Nature” (2012).
  • William Tamplin, “Means of Deception: Iskandari’s Tricks in the Maqamat of Badi‘ al-Zaman al-Hamadhani ” (2012)
  • Adam Díaz, “Lo real maravilloso: Baroque Representations of Latin America in Junot Díaz’s The Brief Wondrous Life of Oscar Wao and Alejo Carpentier’s El reino de este mundo” (2011).
  • Rebecca Gessler, “From Wandering to Writing: Jewish Literature in Mexico and Argentina” (2011).
  • Eleanor Warnock, “Collective Autobiography in Hayashi Fumiko’s Hōrōki and Annie Ernaux’s La Place ” (2011).
  • Anna Melyakova, “Writing Without Borders, Rediscovering Creative Identity after Soviet Dissolution and German Re-unification: Chingiz Aitmatov and Christa Wolf” (2008).
  • Leonora Stevens, “Representations of Subalternity and Indigenous Identity In the Films of Jorge Sanjines and Prose of Jose Maria Arguedas” (2007).
  • Kate Bohinc, “Representations of Totalitarianism: Victor Serge’s L’affaire Toulaev and George Orwell’s 1984 ” (2007).
  • Oxana Miliaeva, “The Problem of Subject Recognition in Alexander Bely’s Peterburg as Conditioned by an E.T.A. Hoffmann Intertext” (2007).
  • Sara M. Lewis, “Express Yourself: The Literary Search for Identity Under Fascist Regimes” (2006).
  • Cora Weissbourd, “ That Within Which Passes Show”: The Threat of Individual Consciousness in Hamlet and Phèdre ” (2006).
  • Shauna Maher, “ Oulipo and Oplepo: Potential Literature in France and Italy” (2006).
  • Eléonore Paule Veillet, “Transculturation in Latin America and North Africa: The Repercussions of Colonization in Carpentier, Fanon, and Yacine” (2005).
  • Anne Popolizio, “Immigrant Narratives and the Education of Integration: When I Was Puerto Rican and Le Gone du Chaaba ” (2005).
  • Ashley Bishop Ahearn, “Time and Character in Virginia Woolf and Marguerite Duras” (2005).
  • Leah Price, “The Naked Masks Come Out to Play: Contradiction of Technique in Questa Sera si Recita a Soggetto” (2005).
  • Katrine Lvovskaya, “The Eternal Outcast. Queer Directions from Wilde to Haynes” (2004).
  • Glen Goodman, “Residences or Residencias: Refraction, Domestication, and Foreignization in English-Language Translations of Pablo Neruda” (2004).
  • Tom Genova, “Redefining ‘You’: Messianism, the Other, and the Irrational in Les Fleurs du Mal and La casa de Bernarda Alba ” (2004).
  • Kim Gravette, “Domesticity and Creative Self-Expression: Like Water for Chocolate: A Novel in Monthly Installments with Recipes, Romances, and Home Remedies by Laura Esquivel and The Kitchen God’s Wife by Amy Tan ” (2003).
  • Christina Montero, “Discovering the Epic in Early 20th Century Modernism: T.S. Eliot and Pablo Neruda, a Comparative Journey of Poetic Purpose within The Waste Land and Canto General ” (2003).
  • Caroline Von Althann, “Film Adaptations of Jane Austen’s Novels” (2003).
  • Briana Komar, “A Marxist Reading of Le Père Goriot by Honoré de Balzac and Great Expectations by Charles Dickens” (2003).


5 Compare and Contrast Essay Examples (Full Text)

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.

A compare and contrast essay selects two or more items that are critically analyzed to demonstrate their differences and similarities. Here is a template for you that provides the general structure:

[Image: compare and contrast essay format template]

A range of example essays is presented below.

Compare and Contrast Essay Examples

#1 Jean Piaget vs Lev Vygotsky Essay

1480 Words | 5 Pages | 10 References

(Level: University Undergraduate)


Thesis Statement: “This essay will critically examine and compare the developmental theories of Jean Piaget and Lev Vygotsky, focusing on their differing views on cognitive development in children and their influence on educational psychology, through an exploration of key concepts such as the role of culture and environment, scaffolding, equilibration, and their overall implications for educational practices.”

#2 Democracy vs Authoritarianism Essay


Thesis Statement: “The thesis of this analysis is that, despite the efficiency and control offered by authoritarian regimes, democratic systems, with their emphasis on individual freedoms, participatory governance, and social welfare, present a more balanced and ethically sound approach to governance, better aligned with the ideals of a just and progressive society.”

#3 Apples vs Oranges Essay

1190 Words | 5 Pages | 0 References

(Level: 4th Grade, 5th Grade, 6th Grade)


Thesis Statement: “While apples and oranges are both popular and nutritious fruits, they differ significantly in their taste profiles, nutritional benefits, cultural symbolism, and culinary applications.”

#4 Nature vs Nurture Essay

1525 Words | 5 Pages | 11 References

(Level: High School and College)


Thesis Statement: “The purpose of this essay is to examine and elucidate the complex and interconnected roles of genetic inheritance (nature) and environmental influences (nurture) in shaping human development across various domains such as physical traits, personality, behavior, intelligence, and abilities.”

#5 Dogs vs Cats Essay

1095 Words | 5 Pages | 7 Bibliographic Sources

(Level: 6th Grade, 7th Grade, 8th Grade)

Thesis Statement: “This essay explores the distinctive characteristics, emotional connections, and lifestyle considerations associated with owning dogs and cats, aiming to illuminate the unique joys and benefits each pet brings to their human companions.”

How to Write a Compare and Contrast Essay

I’ve recorded a full video for you on how to write a compare and contrast essay.


In the video, I outline the steps to writing your essay. Here they are explained below:

1. Essay Planning

First, I recommend using my compare and contrast worksheet, which acts like a Venn Diagram, walking you through the steps of comparing the similarities and differences of the concepts or items you’re comparing.

I recommend selecting 3-5 features that can be compared, as shown in the worksheet:

[Image: compare and contrast worksheet]


2. Writing the Essay

Once you’ve completed the worksheet, you’re ready to start writing. Go systematically through each feature you are comparing and discuss the similarities and differences, then make an evaluative statement after showing your depth of knowledge:

[Image: compare and contrast essay template]


How to Write a Compare and Contrast Thesis Statement

Compare and contrast thesis statements can either:

  • Remain neutral in an expository tone.
  • Prosecute an argument about which of the items you’re comparing is overall best.

To write an argumentative thesis statement for a compare and contrast essay, try this AI prompt:

💡 AI Prompt to Generate Ideas: “I am writing a compare and contrast essay that compares [Concept 1] and [Concept 2]. Give me 5 potential single-sentence thesis statements that pass a reasonable judgement.”
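If you want to reuse this prompt for several essay topics, here is a minimal sketch in plain Python that simply fills in the two placeholders. It assumes nothing beyond standard string formatting; the function name and the example concepts are illustrative, not part of the original worksheet, and the output is meant to be pasted into whichever AI tool you use.

# Minimal sketch: fill the compare-and-contrast AI prompt template for any two concepts.
PROMPT_TEMPLATE = (
    "I am writing a compare and contrast essay that compares {concept_1} and {concept_2}. "
    "Give me 5 potential single-sentence thesis statements that pass a reasonable judgement."
)

def build_prompt(concept_1: str, concept_2: str) -> str:
    """Return the filled-in prompt, ready to paste into an AI tool."""
    return PROMPT_TEMPLATE.format(concept_1=concept_1, concept_2=concept_2)

# Example usage with illustrative concepts:
print(build_prompt("Jean Piaget's theory", "Lev Vygotsky's theory"))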


