THE BASICS OF SOCIAL RESEARCH
A Note from the Author

Writing is my joy, sociology my passion. I delight in putting words together in a way that makes people learn or laugh or both. Sociology is one way I can do just that. It represents our last, best hope for planet-training our race and finding ways for us to live together. I feel a special excitement at being present when sociology, at last, comes into focus as an idea whose time has come.

I grew up in small-town Vermont and New Hampshire. When I announced I wanted to be an auto-body mechanic, like my dad, my teacher told me I should go to college instead. When Malcolm X announced he wanted to be a lawyer, his teacher told him a colored boy should be something more like a carpenter. The difference in our experiences says something powerful about the idea of a level playing field. The inequalities among ethnic groups run deep.

I ventured into the outer world by way of Harvard, the USMC, U.C. Berkeley, and 12 years teaching at the University of Hawaii. Along the way, I married Sheila two months after our first date, and we created Aaron three years after that: two of my wisest acts. I resigned from teaching in 1980 and wrote full-time for seven years, until the call of the classroom became too loud to ignore.

For me, teaching is like playing jazz. Even if you perform the same number over and over, it never comes out the same twice, and you don’t know exactly what it’ll sound like until you hear it. Teaching is like writing with your voice.

At last, I have matured enough to rediscover and appreciate my roots in Vermont each summer. Rather than a return to the past, it feels more like the next turn in a widening spiral. I can’t wait to see what’s around the next bend.
THE BASICS OF SOCIAL RESEARCH
Fourth Edition

Earl Babbie
Chapman University

Australia • Brazil • Canada • Mexico • Singapore • Spain • United Kingdom • United States

The Basics of Social Research, Fourth Edition
Earl Babbie

Acquisitions Editor: Chris Caldeira
Development Editor: Sherry Symington
Assistant Editor: Christina Ho
Editorial Assistant: Tali Beesley
Technology Project Manager: Dave Lionetti
Marketing Manager: Michelle Williams
Marketing Assistant: Jaren Boland
Marketing Communications Manager: Linda Yip
Project Manager, Editorial Production: Matt Ballantyne
Creative Director: Rob Hugel
Art Director: John Walker
Print Buyer: Nora Massuda
Permissions Editor: Roberta Broyer
Production Service: Greg Hubit Bookworks
Copy Editor: Molly D. Roth
Illustrator: Lotus Art
Cover Designer: Yvo Riezebos
Cover Image: © Chad Ehlers/Alamy
Compositor: Newgen
Text and Cover Printer: Courier—Westford

© 2008, 2005 Thomson Wadsworth, a part of The Thomson Corporation. Thomson, the Star logo, and Wadsworth are trademarks used herein under license.

Thomson Higher Education
10 Davis Drive
Belmont, CA 94002-3098
USA

ALL RIGHTS RESERVED. No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, web distribution, information storage and retrieval systems, or in any other manner—without the written permission of the publisher.

Printed in the United States of America
1 2 3 4 5 6 7 11 10 09 08 07

For more information about our products, contact us at:
Thomson Learning Academic Resource Center
1-800-423-0563

For permission to use material from this text or product, submit a request online at http://www.thomsonrights.com.
Any additional questions about permissions can be submitted by e-mail to [email protected].

Library of Congress Control Number: 2006938498

Student Edition:
ISBN-13: 978-0-495-09468-5
ISBN-10: 0-495-09468-4

International Student Edition:
ISBN: 0-495-10233-4 (Not for sale in the United States)

Dedication
Evelyn Fay Babbie
Henry Robert Babbie

CONTENTS IN BRIEF

Part One
AN INTRODUCTION TO INQUIRY 1
1 Human Inquiry and Science 3
2 Paradigms, Theory, and Research 32
3 The Ethics and Politics of Social Research 64

Part Two
THE STRUCTURING OF INQUIRY: QUANTITATIVE AND QUALITATIVE 92
4 Research Design 94
5 Conceptualization, Operationalization, and Measurement 130
6 Indexes, Scales, and Typologies 168
7 The Logic of Sampling 198

Part Three
MODES OF OBSERVATION: QUANTITATIVE AND QUALITATIVE 242
8 Experiments 244
9 Survey Research 268
10 Qualitative Field Research 312
11 Unobtrusive Research 348
12 Evaluation Research 382

Part Four
ANALYSIS OF DATA: QUANTITATIVE AND QUALITATIVE 412
13 Qualitative Data Analysis 414
14 Quantitative Data Analysis 442
15 Reading and Writing Social Research 470

Appendixes
A Using the Library 498
B Random Numbers 506
C Distribution of Chi Square 508
D Normal Curve Areas 510
E Estimated Sampling Error 511

CONTENTS

Preface xvii

Part One
AN INTRODUCTION TO INQUIRY

Chapter 1 Human Inquiry and Science 3
What Do You Think? 4
Introduction 4
Looking for Reality 6
Ordinary Human Inquiry 6
Tradition 7
Authority 8
Errors in Inquiry and Some Solutions 8
What’s Really Real? 10
The Foundations of Social Science 12
Theory, Not Philosophy or Belief 13
Social Regularities 13
Aggregates, Not Individuals 15
A Variable Language 16
The Purposes of Social Research 21
The Ethics of Human Inquiry 22
Some Dialectics of Social Research 22
Idiographic and Nomothetic Explanation 22
Inductive and Deductive Theory 23
Quantitative and Qualitative Data 25
Pure and Applied Research 27
What Do You Think? Revisited 28
Main Points 29
Key Terms 30
Review Questions 30
Online Study Resources 30
Additional Readings 30

Chapter 2 Paradigms, Theory, and Research 32
What Do You Think? 33
Introduction 33
Some Social Science Paradigms 34
Macrotheory and Microtheory 36
Early Positivism 36
Conflict Paradigm 37
Symbolic Interactionism 37
Ethnomethodology 38
Structural Functionalism 39
Feminist Paradigms 40
Critical Race Theory 42
Rational Objectivity Reconsidered 42
Two Logical Systems Revisited 45
The Traditional Model of Science 45
Deduction and Induction Compared 48
Deductive Theory Construction 54
Getting Started 54
Constructing Your Theory 54
An Example of Deductive Theory: Distributive Justice 54
Inductive Theory Construction 56
An Example of Inductive Theory: Why Do People Smoke Marijuana? 57
The Links between Theory and Research 58
The Importance of Theory in the “Real World” 59
Research Ethics and Theory 60
What Do You Think? Revisited 60
Main Points 61
Key Terms 62
Review Questions 62
Online Study Resources 62
Additional Readings 62

Chapter 3 The Ethics and Politics of Social Research 64
What Do You Think? 65
Introduction 65
Ethical Issues in Social Research 66
Voluntary Participation 67
No Harm to the Participants 68
Anonymity and Confidentiality 69
Deception 72
Analysis and Reporting 73
Institutional Review Boards 74
Professional Codes of Ethics 77
Two Ethical Controversies 79
Trouble in the Tearoom 79
Observing Human Obedience 80
The Politics of Social Research 81
Objectivity and Ideology 82
Politics with a Little “p” 86
Politics in Perspective 87
What Do You Think? Revisited 88
Main Points 88
Key Terms 89
Review Questions 89
Online Study Resources 90
Additional Readings 91

Part Two
THE STRUCTURING OF INQUIRY: QUANTITATIVE AND QUALITATIVE 92

Chapter 4 Research Design 94
What Do You Think? 95
Introduction 96
Three Purposes of Research 97
Exploration 97
Description 99
Explanation 99
The Logic of Nomothetic Explanation 99
Criteria for Nomothetic Causality 100
Nomothetic Causal Analysis and Hypothesis Testing 101
False Criteria for Nomothetic Causality 102
Necessary and Sufficient Causes 102
Units of Analysis 104
Individuals 105
Groups 106
Organizations 106
Social Interactions 108
Social Artifacts 108
Units of Analysis in Review 109
Faulty Reasoning about Units of Analysis: The Ecological Fallacy and Reductionism 109
The Time Dimension 111
Cross-Sectional Studies 111
Longitudinal Studies 112
Approximating Longitudinal Studies 115
Examples of Research Strategies 117
How to Design a Research Project 117
Getting Started 119
Conceptualization 120
Choice of Research Method 120
Operationalization 120
Population and Sampling 121
Observations 121
Data Processing 121
Analysis 122
Application 122
Research Design in Review 122
The Research Proposal 123
Elements of a Research Proposal 124
The Ethics of Research Design 125
What Do You Think? Revisited 125
Main Points 126
Key Terms 127
Review Questions 127
Online Study Resources 128
Additional Readings 128

Chapter 5 Conceptualization, Operationalization, and Measurement 130
What Do You Think? 131
Introduction 131
Measuring Anything That Exists 132
Conceptions, Concepts, and Reality 133
Conceptions as Constructs 134
Conceptualization 136
Indicators and Dimensions 136
The Interchangeability of Indicators 139
Real, Nominal, and Operational Definitions 139
Creating Conceptual Order 140
An Example of Conceptualization: The Concept of Anomie 142
Definitions in Descriptive and Explanatory Studies 145
Operationalization Choices 147
Range of Variation 147
Variations between the Extremes 148
A Note on Dimensions 148
Defining Variables and Attributes 149
Levels of Measurement 149
Single or Multiple Indicators 154
Some Illustrations of Operationalization Choices 154
Operationalization Goes On and On 155
Criteria of Measurement Quality 156
Precision and Accuracy 156
Reliability 157
Validity 160
Who Decides What’s Valid? 162
Tension between Reliability and Validity 163
What Do You Think? Revisited 163
The Ethics of Measurement 164
Main Points 164
Key Terms 165
Review Questions 165
Online Study Resources 166
Additional Readings 166

Chapter 6 Indexes, Scales, and Typologies 168
What Do You Think? 169
Introduction 169
Indexes versus Scales 170
Index Construction 173
Item Selection 173
Examination of Empirical Relationships 174
Index Scoring 179
Handling Missing Data 180
Index Validation 182
The Status of Women: An Illustration of Index Construction 185
Scale Construction 186
Bogardus Social Distance Scale 186
Thurstone Scales 187
Likert Scaling 188
Semantic Differential 189
Guttman Scaling 190
Typologies 193
What Do You Think? Revisited 194
Main Points 195
Key Terms 196
Review Questions 196
Online Study Resources 196
Additional Readings 196

Chapter 7 The Logic of Sampling 198
What Do You Think? 199
Introduction 199
A Brief History of Sampling 200
President Alf Landon 201
President Thomas E. Dewey 202
Two Types of Sampling Methods 203
Nonprobability Sampling 203
Reliance on Available Subjects 203
Purposive or Judgmental Sampling 204
Snowball Sampling 205
Quota Sampling 205
Selecting Informants 206
The Theory and Logic of Probability Sampling 207
Conscious and Unconscious Sampling Bias 208
Representativeness and Probability of Selection 210
Random Selection 211
Probability Theory, Sampling Distributions, and Estimates of Sample Error 212
Populations and Sampling Frames 221
Types of Sampling Designs 223
Simple Random Sampling 224
Systematic Sampling 224
Stratified Sampling 227
Implicit Stratification in Systematic Sampling 230
Illustration: Sampling University Students 230
Sample Modification 230
Multistage Cluster Sampling 231
Multistage Designs and Sampling Error 232
Stratification in Multistage Cluster Sampling 234
Probability Proportionate to Size (PPS) Sampling 235
Disproportionate Sampling and Weighting 236
Probability Sampling in Review 238
The Ethics of Sampling 238
What Do You Think? Revisited 238
Main Points 239
Key Terms 240
Review Questions 240
Online Study Resources 240
Additional Readings 241

Part Three
MODES OF OBSERVATION: QUANTITATIVE AND QUALITATIVE 242

Chapter 8 Experiments 244
What Do You Think? 244
Introduction 244
Topics Appropriate to Experiments 246
The Classical Experiment 246
Independent and Dependent Variables 247
Pretesting and Posttesting 247
Experimental and Control Groups 248
The Double-Blind Experiment 249
Selecting Subjects 250
Probability Sampling 250
Randomization 251
Matching 251
Matching or Randomization? 252
Variations on Experimental Design 253
Preexperimental Research Designs 253
Validity Issues in Experimental Research 254
An Illustration of Experimentation 259
Web-Based Experiments 262
“Natural” Experiments 263
Strengths and Weaknesses of the Experimental Method 264
Ethics and Experiments 265
What Do You Think? Revisited 265
Main Points 266
Key Terms 267
Review Questions 267
Online Study Resources 267
Additional Readings 267

Chapter 9 Survey Research 268
What Do You Think? 269
Introduction 269
Topics Appropriate for Survey Research 270
Guidelines for Asking Questions 271
Choose Appropriate Question Forms 272
Make Items Clear 273
Avoid Double-Barreled Questions 273
Respondents Must Be Competent to Answer 274
Respondents Must Be Willing to Answer 274
Questions Should Be Relevant 274
Short Items Are Best 276
Avoid Negative Items 276
Avoid Biased Items and Terms 277
Questionnaire Construction 278
General Questionnaire Format 278
Formats for Respondents 278
Contingency Questions 279
Matrix Questions 280
Ordering Items in a Questionnaire 281
Questionnaire Instructions 282
Pretesting the Questionnaire 283
A Sample Questionnaire 283
Self-Administered Questionnaires 286
Mail Distribution and Return 286
Monitoring Returns 287
Follow-up Mailings 288
Acceptable Response Rates 288
A Case Study 289
Interview Surveys 291
The Role of the Survey Interviewer 291
General Guidelines for Survey Interviewing 292
Coordination and Control 294
Telephone Surveys 295
Positive and Negative Factors 295
Computer-Assisted Telephone Interviewing (CATI) 297
Response Rates in Interview Surveys 297
New Technologies and Survey Research 299
Comparison of the Different Survey Methods 302
Strengths and Weaknesses of Survey Research 303
Secondary Analysis 304
Ethics and Survey Research 307
What Do You Think? Revisited 307
Main Points 308
Key Terms 309
Review Questions 309
Online Study Resources 310
Additional Readings 310

Chapter 10 Qualitative Field Research 312
What Do You Think? 313
Introduction 313
Topics Appropriate to Field Research 314
Special Considerations in Qualitative Field Research 317
The Various Roles of the Observer 317
Relations to Subjects 319
Some Qualitative Field Research Paradigms 321
Naturalism 321
Ethnomethodology 322
Grounded Theory 324
Case Studies and the Extended Case Method 326
Institutional Ethnography 328
Participatory Action Research 329
Conducting Qualitative Field Research 333
Preparing for the Field 333
Qualitative Interviewing 335
Focus Groups 338
Recording Observations 340
Strengths and Weaknesses of Qualitative Field Research 342
Validity 343
Reliability 344
What Do You Think? Revisited 344
Ethics in Qualitative Field Research 345
Main Points 345
Key Terms 346
Review Questions 346
Online Study Resources 346
Additional Readings 347

Chapter 11 Unobtrusive Research 348
What Do You Think? 349
Introduction 349
Content Analysis 350
Topics Appropriate to Content Analysis 350
Sampling in Content Analysis 352
Coding in Content Analysis 355
Illustrations of Content Analysis 359
Strengths and Weaknesses of Content Analysis 361
Analyzing Existing Statistics 362
Durkheim’s Study of Suicide 362
The Consequences of Globalization 364
Units of Analysis 365
Problems of Validity 365
Problems of Reliability 366
Sources of Existing Statistics 366
Comparative and Historical Research 369
Examples of Comparative and Historical Research 369
Sources of Comparative and Historical Data 374
Analytical Techniques 376
Ethics and Unobtrusive Measures 378
What Do You Think? Revisited 378
Main Points 379
Key Terms 379
Review Questions 380
Online Study Resources 380
Additional Readings 380

Chapter 12 Evaluation Research 382
What Do You Think? 383
Introduction 383
Topics Appropriate to Evaluation Research 385
Formulating the Problem: Issues of Measurement 386
Specifying Outcomes 387
Measuring Experimental Contexts 388
Specifying Interventions 388
Specifying the Population 389
New versus Existing Measures 389
Operationalizing Success/Failure 389
Types of Evaluation Research Designs 390
Experimental Designs 390
Quasi-Experimental Designs 391
Qualitative Evaluations 395
Logistical Problems 397
Use of Research Results 400
Social Indicators Research 406
The Death Penalty and Deterrence 406
Computer Simulation 407
Ethics and Evaluation Research 408
What Do You Think? Revisited 409
Main Points 410
Key Terms 410
Review Questions 410
Online Study Resources 411
Additional Readings 411

Part Four
ANALYSIS OF DATA: QUANTITATIVE AND QUALITATIVE 412

Chapter 13 Qualitative Data Analysis 414
What Do You Think? 415
Introduction 415
Linking Theory and Analysis 416
Discovering Patterns 416
Grounded Theory Method 417
Semiotics 419
Conversation Analysis 421
Qualitative Data Processing 421
Coding 422
Memoing 426
Concept Mapping 427
Computer Programs for Qualitative Data 428
QDA Programs 428
Leviticus as Seen through NUD*IST 429
Using NVivo to Understand Women Film Directors, by Sandrine Zerbib 433
The Qualitative Analysis of Quantitative Data 438
Ethics and Qualitative Data Analysis 438
What Do You Think? Revisited 439
Main Points 439
Key Terms 440
Review Questions 440
Online Study Resources 441
Additional Readings 441

Chapter 14 Quantitative Data Analysis 442
What Do You Think? 443
Introduction 443
Quantification of Data 444
Developing Code Categories 445
Codebook Construction 447
Data Entry 448
Univariate Analysis 448
Distributions 449
Central Tendency 450
Dispersion 453
Continuous and Discrete Variables 454
Detail versus Manageability 454
Subgroup Comparisons 455
“Collapsing” Response Categories 456
Handling Don’t Knows 457
Numerical Descriptions in Qualitative Research 458
Bivariate Analysis 459
Percentaging a Table 460
Constructing and Reading Bivariate Tables 462
Introduction to Multivariate Analysis 463
Sociological Diagnostics 464
Ethics and Quantitative Data Analysis 466
What Do You Think? Revisited 466
Main Points 467
Key Terms 468
Review Questions 468
Online Study Resources 469
Additional Readings 469

Chapter 15 Reading and Writing Social Research 470
What Do You Think? 471
Introduction 471
Reading Social Research 471
Organizing a Review of the Literature 471
Journals versus Books 472
Evaluation of Research Reports 474
Using the Internet Wisely 478
Writing Social Research 486
Some Basic Considerations 486
Organization of the Report 488
Guidelines for Reporting Analyses 491
Going Public 492
What Do You Think? Revisited 492
The Ethics of Reading and Writing Social Research 493
Main Points 493
Key Terms 494
Review Questions and Exercises 494
Online Study Resources 495
Additional Readings 495

Appendixes
A Using the Library 498
B Random Numbers 506
C Distribution of Chi Square 508
D Normal Curve Areas 510
E Estimated Sampling Error 511

Glossary 513
References 525
Index 535

PREFACE

The book in your hands has been about four decades in the making. It began in the classroom, when I was asked to teach a seminar in survey research. Frustrated with the lack of good textbooks on the subject, I began to dream up something I called “A Survey Research Cookbook and Other Fables,” which was published in 1973 with a more sober title: Survey Research Methods. The book was an immediate success.
However, there were few courses limited to survey research. Several instructors around the country asked if “the same guy” could write a more general methods book, and The Practice of Social Research appeared two years later. The latter book has become a fixture in social research instruction, with the 11th edition published in 2006. The official Chinese edition was published in Beijing in 2000.

Over the life of this first book, successive revisions have been based in large part on suggestions, comments, requests, and corrections from my colleagues around the country and, increasingly, around the world. Many also requested a shorter book with a more applied orientation. Whereas the third quarter of the twentieth century saw a greater emphasis on quantitative, pure research, the century ended with a renaissance of concern for applied sociological research (sometimes called sociological practice) and also a renewed interest in qualitative research. The Basics of Social Research was first published in 1999 in support of these trends. The fourth edition aims at increasing and improving that support.

The book can also be seen as a response to changes in teaching methods and in student demographics. In addition to the emphasis on applied research, some alternative teaching formats have called for a shorter book, and student economics have argued for a paperback. While standard methods courses have continued using The Practice of Social Research, I’ve been delighted to see that the first three editions of Basics seem to have satisfied a substantial group of instructors as well. The fine-tuning in this fourth edition is intended to help Basics serve this group even better than before.

CHANGES IN THE FOURTH EDITION

A revision like this depends heavily on input from the students and faculty who have been using earlier editions.
Some of those suggestions resulted in two new features that have been added across all chapters:

• Increased ethics coverage: Although Chapter 3, The Ethics and Politics of Social Research, deals with research ethics in depth, each of the other chapters has a section discussing some of the ethical issues involved in specific aspects of research.
• Increased coverage of qualitative research: Besides the new material on qualitative research featured in Chapter 10, Qualitative Field Research, and Chapter 13, Qualitative Data Analysis, additional qualitative discussions have been highlighted where appropriate, such as the discussion of the uses of qualitative and quantitative methods in the study of terrorism in Chapter 4, the new section on response rates in interview surveys in Chapter 9, and the example of qualitative content analysis of gangsta rap and nihilism in Chapter 11.

Other changes appear throughout the book:

• A series of “In the Real World” boxes suggests ways in which the topics of a chapter apply within real research settings and also how they may be useful to students outside the realm of research—in their real lives.
• Each of the chapters begins with a “What Do You Think?” box intended to present students with a puzzle that they will be able to resolve as a result of reading the chapter. In this edition, I’ve added some photos to accompany each of these boxes. I hope you will find some of them amusing.
• Many of the figures and diagrams in the book have been redrawn for both increased pedagogical value and visual appeal.
• “Issues and Insights” boxes throughout now showcase fascinating insights by researchers and more in-depth discussion of key issues discussed in the chapters.

Chapter Changes

In addition to those bookwide changes, here are some of the additional changes you’ll find in specific chapters of the book. Many of these changes were made in response to comments and requests from students and faculty.

Part One: An Introduction to Inquiry

1. Human Inquiry and Science
• Data in the birthrate illustration have been updated to strengthen this illustration of social phenomena as distinct from individual phenomena.
• Examples have been generally increased and updated.
• New table on anti-gay prejudice and education.
• More applied/activist examples to expand that theme of this book.
• New subsection introducing the purposes of social research: exploration, description, explanation—plus pure versus applied research.
• Box on Ross Koppel’s research on medication errors, near the beginning of the chapter, to show the relevance of social research.

2. Paradigms, Theory, and Research
• New section on critical race theory.
• Box on the power of political and religious paradigms.

3. The Ethics and Politics of Social Research
• Discussion of the debate over “politicization of science.”
• Discussion of how sloppiness in research is an ethical violation.
• Introduction of the concept of participatory action research.
• New section: “Politics and the Census.”

Part Two: The Structuring of Inquiry

4. Research Design
• New section: “Nomothetic Causal Analysis and Hypothesis Testing.”
• Expanded discussion of the literature review in the design of a study.
• Discussion of the uses of qualitative and quantitative methods in the study of terrorism.
• Example of a cohort analysis overturning the conclusions of a more simplistic analysis.
• A new figure gives a graphic portrayal of the cohort study design.

5. Conceptualization, Operationalization, and Measurement
• The chapter now begins with an example of measuring college satisfaction, which I expand on later in a box.
• Discussion of Clifford Geertz’s “thick description.”
• Added British example of “fear of crime” index.
• Example of Inuit words for snow to illustrate ambiguity in concepts and words.
• Box on the importance of conceptualization in political debates.
• Box applying conceptualization and operationalization to college satisfaction.

6. Indexes, Scales, and Typologies
• Discussion of how minority group members view items in a Bogardus Social Distance Scale.
• Example of indicators being independent of each other.
• Example of typology involving racial hegemony and colonialism.

7. The Logic of Sampling
• Updated presidential election polling data now includes 2004 election.
• Discussion of “cell phone only” problem.
• New example of snowball sampling in Australia.
• Updated, more sophisticated example of sampling error in the mass media.
• A note that sampling error can be calculated for several measures, not just percentages.
• Example of the multistage sampling of cities in Iran.
• Discussion of controversy over weighting in political polls.

Part Three: Modes of Observation

8. Experiments
• Chapter recast in terms of its pedagogical value.
• Introduction of the term field experiment.

9. Survey Research
• Discussion of politicians’ reactions to political polls.
• New section on response rates in interview surveys.
• Discussion of ways to improve response rates.
• Expanded discussion of online surveys.
• Comment on secondary analysis of qualitative data.

10. Qualitative Field Research
• Introduction of the terms emic perspective and etic perspective.
• Introduction of the terms “virtual ethnography” and “autoethnography.”
• Updated Strauss and Corbin guidelines for grounded theory.
• Comment/quotation on the challenge of control in PAR.
• New glossary term: emancipatory research.
• Example of focus groups constituting in-depth interviews.
• Example using the Internet to identify subjects for in-depth interviews.
• New box: “Pencils and Photos in the Hands of Research Subjects.”

11. Unobtrusive Research
• Example of study on the consequences of globalization.
• Historical research on the rise of Christianity.
• Qualitative content analysis of gangsta rap and nihilism.

12. Evaluation Research
• New glossary term: program evaluation/outcome assessment.
• New section on the Sabido methodology.
• Example of monitoring studies in environmental research.

Part Four: Analysis of Data

13. Qualitative Data Analysis
• Expanded discussion of forms of coding.
• New section on computer programs for qualitative data analysis.

14. Quantitative Data Analysis
• Updated data on religious attendance.
• Updated data on marijuana legalization, age, and political orientations.
• Updated data on religious attendance, age, and sex.
• Mention of Gapminder software online.
• Updated data on education, gender, and income.

15. Reading and Writing Social Research
• New section on organizing a review of the literature.
• New section on evaluating content analysis.
• New section on evaluating comparative and historical research.
• Updated and improved illustrations of web searches.
• Introduction of SourceWatch to help students judge trustworthiness of web sources.
• New section, “Going Public,” about student presentations/publications.

Pedagogical Features

Although students and instructors both have told me that the past editions of this book were effective tools for learning research methods, I have seen this edition as an opportunity to review the book from a pedagogical standpoint—fine-tuning some elements, adding others. Here’s the resulting package for the fourth edition.

• Chapter Overview: Each chapter is preceded by a pithy focus paragraph that highlights the principal content of the chapter.
• Chapter Introduction: Each chapter opens with an introduction that lays out the main ideas in that chapter and, importantly, relates them to the content of other chapters in the book.
• Clear and provocative examples: Students often tell me that the examples—real and hypothetical—have helped them grasp difficult and/or abstract ideas, and this edition has many new examples as well as some that have proven particularly valuable in earlier editions.
• Graphics: From the first time I took a course in research methods, most of the key concepts have made sense to me in graphical form. Whereas my task here has been to translate those mental pictures into words, I’ve also included some illustrations. Advances in computer graphics have helped me communicate to the Wadsworth artists what I see in my head and would like to share with students. I’m delighted with the new graphics in this edition.
• Boxed examples and discussions: Students tell me they like the boxed materials that highlight particular ideas and studies as well as vary the format of the book. In this edition, I’ve added “Issues and Insights” boxes to explore key topics, and “In the Real World” boxes to help students see how the ideas they’re reading about apply to real research projects, as well as to their lives.
• Running Glossary: There is a running glossary throughout the text. Key terms are highlighted in the text, and the definition for each term is listed at the bottom of the page where it appears. This will help students learn the definitions of these terms and locate them in each chapter to review them in context.
• Main Points: At the end of each chapter, a concise list of main points provides both a brief chapter summary and a useful review. The main points let students know exactly what ideas they should focus on in each chapter.
• Key Terms: A list of key terms follows the main points. These lists reinforce the students’ acquisition of necessary vocabulary. The new vocabulary in these lists is defined in context in the chapters. The terms are boldfaced in the text, defined in the running glossary that appears at the bottom of the page throughout the text, and included in the glossary at the back of the book.
• Review Questions and Exercises: This review aid allows students to test their understanding of the chapter concepts and apply what they have learned.
• Resources on the Internet: As I mentioned earlier, each chapter ends with this new section.
This edition continues previous editions’ movement into cyberspace.

Additional Readings: In this section, I include an annotated list of references that students can turn to if they would like to learn more about the topics discussed in the chapter.

Appendixes: As in previous editions, a set of appendixes provides students with some research tools, such as a guide to the library, a table of random numbers, and so forth. There is an SPSS primer on the book’s website, along with primers for NVivo and Qualrus.

Clear and accessible writing: This is perhaps the most important “pedagogical aid” of all. I know that all authors strive to write texts that are clear and accessible, and I take some pride in the fact that this “feature” of the book has been one of its most highly praised attributes through nine previous editions. It is the one thing students write most often about. For the fourth edition, the editors and I have taken special care to reexamine literally every line in the book—pruning, polishing, embellishing, and occasionally restructuring for a maximally “reader-friendly” text. Whether you’re new to this book or intimately familiar with previous editions, I invite you to open to any chapter and evaluate the writing for yourself.

SUPPLEMENTS

The Basics of Social Research, Fourth Edition, is accompanied by a wide array of supplements prepared for both the instructor and student to create the best learning environment inside as well as outside the classroom. All the continuing supplements have been thoroughly revised and updated, and several are new to this edition. I invite you to examine and take full advantage of the teaching and learning tools available to you.

For the Student

Guided Activities for Babbie’s The Basics of Social Research, Fourth Edition: The student study guide and workbook Ted Wagenaar and I have prepared continues to be a mainstay of my own teaching. Students tell me they use it heavily as a review of the text, and I count the exercises as half their grade in the course. In this edition, Ted and I have once again sorted through the exercises and added new ones we’ve created in our own teaching or heard about from colleagues. These include matching, multiple-choice, and open-ended discussion questions for each chapter, along with four to six exercises designed to reinforce the material learned in the text with examples from everyday life. Also included are the answers to the matching and multiple-choice review questions, as well as a General Social Survey appendix, plus chapter objectives, chapter summaries, and key terms.

SPSS Student Version CD-ROM 14.0 (Windows only): Based on the professional version of one of the world’s leading desktop statistical software packages, SPSS Student Version for Windows provides real-world software for students to do sociological data analysis, such as interpreting the GSS data sets found on the companion website.

Learning How to Use SPSS: with Exercises: This handy guide is coordinated with the text and SPSS CD-ROM 14.0 to help students learn basic navigation in SPSS, including how to enter their own data; create, save, and retrieve files; produce and interpret data summaries; and much more. Also included are SPSS practice exercises correlated with each chapter. The guide comes free when bundled with the text.

GSS Data Disk: Over the years, we have sought to provide up-to-date personal computer support for students and instructors. Because there are now many excellent programs for analyzing data, we have provided data to be used with them. With this edition we have updated the data disk to include the GSS data.

Experiencing Social Research: An Introduction Using MicroCase, Second Edition: This supplementary workbook and statistical package, written by David J. Ayers of Grove City College, includes short discussions, quizzes, and computerized exercises in which students will learn and apply key methodological concepts and skills by analyzing, and in some cases collecting and building, simple data files of real sociological data. Designed to accompany The Basics of Social Research, the workbook and statistical package take a step-by-step approach to show students how to do real sociological research, using the same data and techniques used by professional researchers, to reinforce, build on, and complement course materials.

Readings in Social Research, Third Edition: The concepts and methodologies of social research come to life in this interesting collection of articles specifically designed to accompany The Basics of Social Research. Diane Kholos Wysocki includes an interdisciplinary range of readings from the fields of psychology, sociology, social work, criminal justice, and political science. The articles focus on the important methods and concepts typically covered in the social research course and provide an illustrative advantage.

Researching Sociology on the Internet, Third Edition: This guide is designed to help sociology students do research on the Internet. Part 1 contains general information necessary to get started and answers questions about security, the type of sociology material available on the Internet, the information that is reliable and the sites that are not, the best ways to find research, and the best links to take students where they want to go. Part 2 looks at each main topic in sociology and refers students to sites where they can obtain the most enlightening research and information.

For the Instructor

Instructor’s Manual with Test Bank:
Written by Margaret Platt Jendrek, this supplement offers the instructor brief chapter outlines, detailed chapter outlines, behavioral objectives, teaching suggestions and resources, InfoTrac College Edition exercises, Internet exercises, and possible study guide answers. In addition, for each chapter of the text, the Test Bank has multiple-choice, true-false, short-answer, and essay questions, with answers and page references. All questions are labeled as new, modified, or pickup, so instructors know whether a question is new to this edition of the test bank, modified but picked up from the previous edition, or picked up straight from the previous edition.

ExamView Computerized Testing for Macintosh and Windows: This allows instructors to create, deliver, and customize printed and online tests and study guides. ExamView includes a Quick Test Wizard and an Online Test Wizard to guide instructors step by step through the process of creating tests. The test appears on screen exactly as it will print or display online. Using ExamView’s complete word-processing capabilities, instructors can enter an unlimited number of new questions or edit questions included with ExamView.

Multimedia Manager with Instructor’s Resources: A Microsoft® PowerPoint® Tool: This one-stop lecture and class preparation tool makes it easy to assemble, edit, publish, and present custom lectures for your course, using Microsoft PowerPoint. The Multimedia Manager brings together art (figures, tables, maps) from this text, preassembled Microsoft PowerPoint lecture slides, and sociology-related videos, along with video and animations from the web or your own materials—culminating in a powerful, personalized, media-enhanced presentation. The CD-ROM also contains a full Instructor’s Manual, Test Bank, and other instructor resources.

JoinIn™ on TurningPoint®: JoinIn on TurningPoint turns your lecture into an interactive experience for your students.
Using Microsoft PowerPoint, you can poll students on key issues, check their comprehension of difficult concepts, and have them become “critical consumers” by watching our exclusive ABC video clips and answering related methodological questions about them.

Internet-Based Supplements

ThomsonNOW™: Empowers students with the first assessment-centered student tutorial system for Social Research/Research Methods. Seamlessly tied to the new edition, this interactive web-based learning tool helps students gauge their unique study needs with a “pretest” for each chapter to assess their understanding of the material. They are then given a personalized study plan that offers interactive, visual, and audio resources to help them master the material. They can check their progress with an interactive posttest as well.

WebTutor™ Toolbox on Blackboard and WebCT: This web-based software for students and instructors takes a course beyond the classroom to an anywhere, anytime environment. Students gain access to the rich content from our book companion websites. Available for WebCT and Blackboard only.

InfoTrac® College Edition with InfoMarks™: Available as a free option with newly purchased texts, InfoTrac College Edition gives instructors and students four months of free access to an extensive online database of reliable, full-length articles (not just abstracts) from thousands of scholarly and popular publications going back as far as 22 years. Among the journals available are the American Journal of Sociology, Social Forces, Social Research, and Sociology. InfoTrac College Edition now also comes with InfoMarks, a tool that allows you to save your search parameters, as well as save links to specific articles. (Available to North American college and university students only; journals are subject to change.)

Companion Website for The Basics of Social Research, Fourth Edition (http://sociology.wadsworth.com/babbie_basics4e).
The book’s companion website includes chapter-specific resources for instructors and students. For instructors, the site offers a password-protected instructor’s manual, Microsoft PowerPoint presentation slides, and more. For students, there is a multitude of text-specific study aids, including the following:

• Tutorial practice quizzing that can be scored and emailed to the instructor
• Web links
• InfoTrac College Edition exercises
• Flashcards
• GSS data sets
• Data Analysis Primers
• MicroCase Online data exercises
• Crossword puzzles

Thomson InSite for Writing and Research™ with Turnitin® originality checker: InSite features a full suite of writing, peer review, online grading, and e-portfolio applications. It is an all-in-one tool that helps instructors manage the flow of papers electronically and allows students to submit papers and peer reviews online. Also included in the suite is Turnitin, an originality checker that offers a simple solution for instructors who want a strong deterrent against plagiarism, as well as encouragement for students to employ proper research techniques. Access is available for packaging with each copy of this book. For more information, visit http://insite.thomson.com.

ACKNOWLEDGMENTS

It would be impossible to acknowledge adequately all the people who have influenced this book. My earlier methods text, Survey Research Methods, was dedicated to Samuel Stouffer, Paul Lazarsfeld, and Charles Glock. I again acknowledge my debt to them. Many colleagues helped me through the eleven editions of The Practice of Social Research and the first three editions of The Basics of Social Research. At this point, I particularly want to thank the instructors who reviewed the manuscript of this edition of Basics and made helpful suggestions:

Melanie Arthur, Portland State University
James W. Cassell, Henderson State University
Leslie Hossfeld, University of North Carolina–Wilmington
Rebecca Utz, University of Utah
Gary Wyatt, Emporia State University

I would also like to thank those who reviewed earlier editions:

C. Neil Bull, University of Missouri–Kansas City
Jeffrey A. Burr, University of Massachusetts–Boston
Karen Campbell, Vanderbilt University
Douglas Forbes, University of Wisconsin–Marshfield
Susan Haire, University of Georgia
Albert Hunter, Northwestern University
Robert Kleidman, Cleveland State University
Ross Koppel, University of Pennsylvania
Susan E. Marshall, University of Texas–Austin
Enrique Pumar, William Patterson University
William G. Staples, University of Kansas
Stephen F. Steele, Anne Arundel Community College
Thankam Sunil, University of Texas at San Antonio
Yvonne Vissing, Salem State College

Over the years, I have become more and more impressed by the important role played by editors in books like this. Since 1973, I’ve worked with many sociology editors at Wadsworth, which has involved the kinds of adjustments you might need to make in as many successive marriages. Happily, this edition of the book has greatly profited from my partnership with Chris Caldeira and Sherry Symington. Perhaps you have to be a textbook author to appreciate how much difference editors make in the writing and publishing experience, but I want to report that I have been blessed with great partners. In my experience, copy editors are the invisible heroes of publishing, and it has been my good fortune and pleasure to have worked with one of the very best, Molly Roth, for several years and books. Among her many gifts, Molly has the uncanny ability to hear what I am trying to say and find ways to help others hear it. Molly’s partnership with Greg Hubit at Bookworks is something special in the publishing world, and I would not want to do a major text without them.
I have dedicated this book to my granddaughter, Evelyn Fay Babbie, born during the revision of the second edition of the book, and my grandson, Henry Robert Babbie, born during the revision of the third edition. They continued to add joy to my life during the revision of the fourth edition, and I am committed to their growing up in a more humane and just world than the one they were born into.

Part One: AN INTRODUCTION TO INQUIRY

1 Human Inquiry and Science
2 Paradigms, Theory, and Research
3 The Ethics and Politics of Social Research

Science is a familiar word used by everyone. Yet images of science differ greatly. For some, science is mathematics; for others, it’s white coats and laboratories. It’s often confused with technology or equated with tough high school or college courses. Science is, of course, none of these things per se. Specifying exactly what science is, however, poses problems. Scientists, in fact, disagree on the proper definition. For the purposes of this book, we’ll look at science as a method of inquiry—a way of learning and knowing things about the world around us. Contrasted with other ways of doing this, science has some special characteristics, which we’ll examine in these opening chapters.

Benjamin Spock, the renowned author and pediatrician, began his books on child care by assuring new parents that they already knew more about child care than they thought they did. I want to begin this book on a similar note. Before you’ve read very far, you’ll see that you already know a great deal about the practice of social scientific research. In fact, you’ve been conducting scientific research all your life. From that perspective, this book aims at helping you sharpen skills you already have and perhaps showing you some tricks that may not have occurred to you.
By examining the fundamental characteristics and issues that make science different from other ways of knowing things, Part 1 lays the groundwork for the rest of the book. In Chapter 1, we’ll begin with a look at native human inquiry, the sort of thing you’ve been doing all your life. In the course of that examination, we’ll see some of the ways people go astray in trying to understand the world around them, and I’ll summarize the primary characteristics of scientific inquiry that guard against those errors.

Chapter 2 deals with social scientific paradigms and theories, as well as the links between theory and research. We’ll look at some of the theoretical paradigms that shape the nature of inquiry, largely determining what scientists look for and how they interpret what they see.

Whereas most of this book concerns the art and science of doing social research, Chapter 3 introduces some of the political and ethical considerations that affect social research. We’ll see the ethical norms that social researchers follow when they design and implement research. We’ll also see how social contexts affect social research.

Overall, Part 1 constructs a backdrop against which to view the more specific aspects of research design and execution. By the time you complete Part 1, you should be ready to look at some of the more concrete aspects of social research.

Chapter 1: HUMAN INQUIRY AND SCIENCE

What You’ll Learn in This Chapter: We’ll examine how people learn about their world and look at the mistakes they make along the way. We’ll also begin to see what makes science different from other ways of knowing.

In this chapter . . .

What Do You Think?
Introduction
Looking for Reality
  Ordinary Human Inquiry
  Tradition
  Authority
  Errors in Inquiry and Some Solutions
  What’s Really Real?
The Foundations of Social Science
  Theory, Not Philosophy or Belief
  Social Regularities
  Aggregates, Not Individuals
  A Variable Language
The Purposes of Social Research
The Ethics of Human Inquiry
Some Dialectics of Social Research
  Idiographic and Nomothetic Explanation
  Inductive and Deductive Theory
  Quantitative and Qualitative Data
  Pure and Applied Research

WHAT DO YOU THINK? The decision to have a baby is deeply personal. No one is in charge of who will have babies in the United States in any given year, or of how many will be born. Whereas you must get a license to marry or go fishing, you do not need a license to have a baby. Many couples delay pregnancy, some pregnancies happen by accident, and some pregnancies are planned. Given all these uncertainties and idiosyncrasies, how can baby food and diaper manufacturers know how much to produce from year to year? By the end of this chapter, you should be able to answer this question. See the “What Do You Think? Revisited” box toward the end of the chapter.

INTRODUCTION

This book is about knowing things—not so much what we know as how we know it. Let’s start by examining a few things you probably know already. You know the world is round. You probably also know it’s cold on the dark side of the moon, and you know people speak Japanese in Japan. You know that vitamin C can prevent colds and that unprotected sex can result in AIDS. How do you know? If you think for a minute, you’ll see you know these things because somebody told them to you, and you believed them. You may have read in National Geographic that people speak Japanese in Japan, and that made sense to you, so you didn’t question it. Perhaps your physics or astronomy instructor told you it was cold on the dark side of the moon, or maybe you read it on the NASA website. Some of the things you know seem obvious to you.
If I asked you how you know the world is round, you’d probably say, “Everybody knows that.” There are a lot of things everybody knows. Of course, at one time, everyone “knew” the world was flat. Most of what you know is a matter of agreement and belief. Little of it is based on personal experience and discovery. A big part of growing up in any society, in fact, is the process of learning to accept what everybody around you “knows” is so. If you don’t know those same things, you can’t really be a part of the group. If you were to question seriously that the world is really round, you’d quickly find yourself set apart from other people. You might be sent to live in a hospital with others who ask questions like that. So, most of what you know is a matter of believing what you’ve been told. Understand that there’s nothing wrong with you in that respect. That’s simply the way human societies are structured. The basis of knowledge is agreement. Because you can’t learn through personal experience and discovery alone all you need to know, things are set up so you can simply believe what others tell you. You know some things through tradition, others from “experts.” I’m not saying you shouldn’t question this received knowledge; I’m just drawing your attention to the way you and society normally get along regarding what’s so. There are other ways of knowing things, however. In contrast to knowing things through agreement, you can know them through direct experience—through observation. If you dive into a glacial stream flowing through the Canadian Rockies, you don’t need anyone to tell you it’s cold. When your experience conflicts with what everyone else knows, though, there’s a good chance you’ll surrender your experience in favor of the agreement. For example, imagine you’ve come to a party at my house. It’s a high-class affair, and the drinks and food are excellent. 
In particular, you’re taken by one of the appetizers I bring around on a tray: a breaded, deep-fried tidbit that’s especially zesty. You have a couple—they’re so delicious! You have more. Soon you’re subtly moving around the room to be wherever I am when I arrive with a tray of these nibblies. Finally, you can contain yourself no longer. “What are they?” you ask. I let you in on the secret: “You’ve been eating breaded, deep-fried worms!” Your response is dramatic: Your stomach rebels, and you promptly throw up all over the living room rug. What a terrible thing to serve guests!

[Photo (Earl Babbie): We learn some things by experience, others by agreement.]

The point of the story is that both of your feelings about the appetizer were quite real. Your initial liking for them was certainly real, but so was the feeling you had when you found out what you’d been eating. It should be evident, however, that the disgust you felt was strictly a product of the agreements you have with those around you that worms aren’t fit to eat. That’s an agreement you began the first time your parents found you sitting in a pile of dirt with half of a wriggling worm dangling from your lips. When they pried your mouth open and reached down your throat for the other half of the worm, you learned that worms are not acceptable food in our society.

Aside from these agreements, what’s wrong with worms? They’re probably high in protein and low in calories. Bite-sized and easily packaged, they’re a distributor’s dream. They are also a delicacy for some people who live in societies that lack our agreement that worms are disgusting. Some people might love the worms but be turned off by the deep-fried breading. Here’s a question to consider: “Are worms really good or really bad to eat?” And here’s a more interesting question: “How could you know which was really so?” This book is about answering the second question.
LOOKING FOR REALITY

Reality is a tricky business. You’ve probably long suspected that some of the things you “know” may not be true, but how can you really know what’s real? People have grappled with this question for thousands of years. One answer that has arisen out of that grappling is science, which offers an approach to both agreement reality and experiential reality. Scientists have certain criteria that must be met before they’ll accept the reality of something they haven’t personally experienced. In general, an assertion must have both logical and empirical support: It must make sense, and it must not contradict actual observation. Why do earthbound scientists accept the assertion that it’s cold on the dark side of the moon? First, it makes sense, because the surface heat of the moon comes from the sun’s rays. Second, the scientific measurements made on the moon’s dark side confirm the expectation. So, scientists accept the reality of things they don’t personally experience—they accept an agreement reality—but they have special standards for doing so.

More to the point of this book, however, science offers a special approach to the discovery of reality through personal experience, that is, to the business of inquiry. Epistemology is the science of knowing; methodology (a subfield of epistemology) might be called the science of finding out. This book is an examination and presentation of social science methodology, or how social scientists find out about human social life. You’ll see that some of the methods coincide with the traditional image of science but others have been specially geared to sociological concerns.

In the rest of this chapter, we’ll look at inquiry as an activity. We’ll begin by examining inquiry as a natural human activity, something you and I have engaged in every day of our lives. Next, we’ll look at some kinds of errors we make in normal inquiry, and we’ll conclude by examining what makes science different.
We’ll see some of the ways science guards against common human errors in inquiry. The box “Social Research Making a Difference” gives an example of controlled social research challenging what “everybody knows.”

ISSUES AND INSIGHTS: SOCIAL RESEARCH MAKING A DIFFERENCE

Medication errors in hospitals kill or injure about 770,000 patients each year, and the newly developed Computerized Physician Order Entry (CPOE) systems have been widely acclaimed as the solution to this enormous problem experienced in the traditional system of handwritten prescriptions. Medical science research has generally supported the new technology, but an article in the March 9, 2005, issue of the Journal of the American Medical Association (JAMA) sent a shock wave through the medical community. The sociologist Ross Koppel and colleagues used several of the research techniques you’ll be learning in this book to test the effectiveness of the new technology. Their conclusion: CPOE was not nearly as effective as claimed; it did not prevent errors in medication. As you can imagine, those manufacturing and selling the equipment were not thrilled by the research, and it has generated an ongoing discussion within the health care community. At last count, the study had been cited over 20,000 times in other articles, and Koppel has become a sought-after expert in this regard.

Source: Kathryn Goldman Schuyler, “Medical Errors: Sociological Research Makes News,” Sociological Practice Newsletter (American Sociological Association, Section on Sociological Practice), Winter 2006, p. 1.

Ordinary Human Inquiry

Practically all people exhibit a desire to predict their future circumstances. We seem quite willing, moreover, to undertake this task using causal and probabilistic reasoning. First, we generally recognize that future circumstances are somehow caused or conditioned by present ones. We learn that swimming beyond the reef may bring an unhappy encounter with a shark. As students we learn that studying hard will result in better grades. Second, we also learn that such patterns of cause and effect are probabilistic in nature: The effects occur more often when the causes occur than when the causes are absent—but not always. Thus, students learn that studying hard produces good grades in most instances, but not every time. We recognize the danger of swimming beyond the reef, without believing that every such swim will be fatal. As we’ll see throughout the book, science makes these concepts of causality and probability more explicit and provides techniques for dealing with them more rigorously than does casual human inquiry. It sharpens the skills we already have by making us more conscious, rigorous, and explicit in our inquiries.

In looking at ordinary human inquiry, we need to distinguish between prediction and understanding. Often, we can make predictions without understanding—perhaps you can predict rain when your trick knee aches. And often, even if we don’t understand why, we’re willing to act on the basis of a demonstrated predictive ability. The racetrack buff who finds that the third-ranked horse in the third race of the day always wins will probably keep betting without knowing, or caring, why it works out that way. Whatever the primitive drives or instincts that motivate human beings, satisfying them depends heavily on the ability to predict future circumstances. However, the attempt to predict is often placed in a context of knowledge and understanding. If we can understand why things are related to one another, why certain regular patterns occur, we can predict even better than if we simply observe and remember those patterns. Thus, human inquiry aims at answering both “what” and “why” questions, and we pursue these goals by observing and figuring out.

As I suggested earlier, our attempts to learn about the world are only partly linked to direct, personal inquiry or experience. Another, much larger, part comes from the agreed-on knowledge that others give us. This agreement reality both assists and hinders our attempts to find out for ourselves. To see how, consider two important sources of our secondhand knowledge—tradition and authority.

Tradition

Each of us inherits a culture made up, in part, of firmly accepted knowledge about the workings of the world. We may learn from others that eating too much candy will decay our teeth, that the circumference of a circle is approximately twenty-two sevenths of its diameter, or that masturbation will blind us. We may test a few of these “truths” on our own, but we simply accept the great majority of them, the things that “everybody knows.”

Tradition, in this sense of the term, offers some clear advantages to human inquiry. By accepting what everybody knows, we avoid the overwhelming task of starting from scratch in our search for regularities and understanding. Knowledge is cumulative, and an inherited body of knowledge is the jumping-off point for developing more of it. We often speak of “standing on the shoulders of giants,” that is, of previous generations.

At the same time, tradition may be detrimental to human inquiry. If we seek a fresh understanding of something everybody already understands and has always understood, we may be marked as fools for our efforts. More to the point, however, most of us rarely even think of seeking a different understanding of something we all “know” to be true.

Authority

Despite the power of tradition, new knowledge appears every day. Aside from our personal inquiries, we benefit throughout life from new discoveries and understandings produced by others. Often, acceptance of these new acquisitions depends on the status of the discoverer. You’re more likely to believe the epidemiologist who declares that the common cold can be transmitted through kissing, for example, than to believe your uncle Pete.

Like tradition, authority can both assist and hinder human inquiry. We do well to trust in the judgment of the person who has special training, expertise, and credentials in a given matter, especially in the face of controversy. At the same time, inquiry can be greatly hindered by the legitimate authority who errs within his or her own special province. Biologists, after all, do make mistakes in the field of biology. Inquiry is also hindered when we depend on the authority of experts speaking outside their realm of expertise. For example, consider the political or religious leader with no biochemical expertise who declares that marijuana is a dangerous drug. The advertising industry plays heavily on this misuse of authority by, for example, having popular athletes discuss the nutritional value of breakfast cereals or movie actors evaluate the performance of automobiles.

Both tradition and authority, then, are double-edged swords in the search for knowledge about the world. Simply put, they provide us with a starting point for our own inquiry, but they can lead us to start at the wrong point and push us off in the wrong direction.

Errors in Inquiry and Some Solutions

Quite aside from the potential dangers of tradition and authority, we often stumble and fall when we set out to learn for ourselves. Let’s look at some of the common errors we make in our casual inquiries and the ways science guards against those errors.

Inaccurate Observations

Quite frequently, we make mistakes in our observations. For example, what was your methodology instructor wearing on the first day of class? If you have to guess, that’s because most of our daily observations are casual and semiconscious. That’s why we often disagree about what really happened. In contrast to casual human inquiry, scientific observation is a conscious activity. Simply making observation more deliberate can reduce error. If you had to guess what your instructor was wearing the first day of class, you’d probably make a mistake. If you had gone to the first class meeting with a conscious plan to observe and record what your instructor was wearing, however, you’d likely be more accurate. (You might also need a hobby.) In many cases, both simple and complex measurement devices help guard against inaccurate observations. Moreover, they add a degree of precision well beyond the capacity of the unassisted human senses. Suppose, for example, that you had taken color photographs of your instructor that day. (See earlier comment about needing a hobby.)

Overgeneralization

When we look for patterns among the specific things we observe around us, we often assume that a few similar events are evidence of a general pattern. That is, we tend to overgeneralize on the basis of limited observations. This can misdirect or impede inquiry. Imagine that you’re a reporter covering an animal-rights demonstration. You have to turn in your story in just two hours. Rushing to the scene, you start interviewing people, asking them why they’re demonstrating. If the first two demonstrators you interview give you essentially the same reason, you may simply assume that the other 3,000 would agree. Unfortunately, when your story appears, your editor gets scores of letters from protesters who were there for an entirely different reason. Scientists guard against overgeneralization by seeking a sufficiently large sample of observations. The replication of inquiry provides another safeguard. Basically, this means repeating a study and checking to see if the same results occur each time. Then, as a further test, the study may be repeated under slightly varied conditions.
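The two safeguards just described, a sufficiently large sample and replication, can be illustrated with a short simulation. This sketch is not from the text: the population of 3,000 demonstrators, the 60 percent figure, and the `estimate_share` helper are all hypothetical, chosen to echo the reporter example.

```python
import random

random.seed(42)

# Hypothetical population: 60% of 3,000 demonstrators share reason "A".
population = ["A"] * 1800 + ["B"] * 1200

def estimate_share(sample_size):
    """Estimate the share holding reason 'A' from one random sample."""
    sample = random.sample(population, sample_size)
    return sample.count("A") / sample_size

# Replication: repeat the "study" 200 times at each sample size. A tiny
# sample (like interviewing only the first two demonstrators) is volatile;
# a larger one settles near the true 60%.
for n in (2, 30, 1000):
    estimates = [estimate_share(n) for _ in range(200)]
    spread = max(estimates) - min(estimates)
    mean = sum(estimates) / len(estimates)
    print(f"n={n:4d}  mean={mean:.2f}  spread={spread:.2f}")
```

Run it with different seeds and the pattern holds: the two-person "sample" swings wildly between 0 and 100 percent, while the thousand-person sample stays close to the population value. That is the statistical sense in which a sufficiently large sample, checked by replication, guards against overgeneralizing from a few observations.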
Selective Observation

One danger of overgeneralization is that it may lead to selective observation. Once you have concluded that a particular pattern exists and have developed a general understanding of why it does, you’ll tend to focus on future events and situations that fit the pattern, and you’ll ignore those that don’t. Racial and ethnic prejudices depend heavily on selective observation for their persistence. In another example, here’s how Lewis Hill recalls growing up in rural Vermont:

Haying began right after the Fourth of July. The farmers in our neighborhood believed that anyone who started earlier was sure to suffer all the storms of late June in addition to those following the holiday, which the old-timers said were caused by all the noise and smoke of gunpowder burning. My mother told me that my grandfather and other Civil War veterans claimed it always rained hard after a big battle. Things didn’t always work out the way the older residents promised, of course, but everyone remembered only the times they did. (Hill 2000:35)

Sometimes a research design will specify in advance the number and kind of observations to be made, as a basis for reaching a conclusion. If you and I wanted to learn whether women were more likely than men to support the legality of abortion, we’d commit ourselves to making a specified number of observations on that question in a research project. We might select a thousand people to be interviewed on the issue. Alternatively, when making direct observations of an event, such as an animal-rights demonstration, social scientists make a special effort to find “deviant cases”—those who do not fit into the general pattern.

Illogical Reasoning

There are other ways of handling observations that contradict our conclusions about the way things are in daily life. Surely one of the most remarkable creations of the human mind is “the exception that proves the rule.” This idea makes no sense at all.
An exception can draw attention to a rule or to a supposed rule, but in no system of logic can it prove the rule it contradicts. Yet we often use this pithy saying to brush away contradictions with a simple stroke of illogic. What statisticians have called the gambler’s fallacy is another illustration of illogic in day-to-day reasoning. A consistent run of either good or bad luck is presumed to foreshadow its opposite. An evening of bad luck at poker may kindle the belief that a winning hand is just around the corner; many a poker player has stayed in a game much too long because of that mistaken belief. Conversely, an extended period of good weather may lead you to worry that it is certain to rain on the weekend picnic. Although all of us sometimes fall into embarrassingly illogical reasoning in daily life, scientists avoid this pitfall by using systems of logic consciously and explicitly. Chapter 2 will examine the logic of science in more depth. For now, it’s enough to note that logical reasoning is a conscious activity for scientists, who have colleagues around to keep them honest.

These, then, are a few of the ways we go astray in our attempts to know and understand the world, and some of the ways that science protects inquiry from these pitfalls. Accurately observing and understanding reality is not an obvious or trivial matter. Indeed, it’s more complicated than I’ve suggested. (See the box “Applying Scientific Inquiry to Daily Life” for more on this topic.)

replication — The duplication of an experiment to expose or reduce error.

CHAPTER 1 HUMAN INQUIRY AND SCIENCE

What’s Really Real?

Philosophers sometimes use the phrase naive realism to describe the way most of us operate in our daily lives. When you sit at a table to write, you probably don’t spend a lot of time thinking about whether the table is really made up of atoms, which in turn are mostly empty space.
When you step into the street and see a city bus hurtling down on you, it’s not the best time to reflect on methods for testing whether the bus really exists. We all live with a view that what’s real is pretty obvious—and that view usually gets us through the day. Even so, I hope you can see that the nature of “reality” is perhaps more complex than we tend to assume. As a philosophical backdrop for the discussions to follow, let’s look at what are sometimes called premodern, modern, and postmodern views of reality (W. Anderson 1990).

The Premodern View

This view of reality has guided most of human history. Our early ancestors assumed that they saw things as they really were. In fact, this assumption was so fundamental that they didn’t even see it as an assumption. No cavemom said to her cavekid, “Our tribe makes an assumption that evil spirits reside in the Old Twisted Tree.” No, she said, “STAY OUT OF THAT TREE OR YOU’LL TURN INTO A TOAD!” As humans evolved and became aware of their diversity, they came to recognize that others did not always share their views of things. Thus, they may have discovered that another tribe didn’t believe the tree was wicked; in fact, the second tribe believed that the tree spirits were holy and beneficial. The discovery of this diversity led members of the first tribe to conclude that “some tribes I could name are pretty stupid.” For them, the tree was still wicked, and they expected some misguided people to be moving to Toad City.

IN THE REAL WORLD: APPLYING SCIENTIFIC INQUIRY TO DAILY LIFE

As we proceed in this examination of social science research methods, I’ll tend to talk as though I were training you for a career as a researcher. However, I realize you may not be planning on that. As such, I want to point out ways in which the topics of this book might relate to the world at large. What you learn here may apply to your life and career in ways you might not realize.
Most of the “In the Real World” boxes will point to the everyday uses of something you’ve learned in a particular chapter. In this first box, however, I want to make the general point that even if you do not end up doing social science research, you’ll be a consumer of social science research throughout your life. You’ll be hearing about which political candidate is leading another, what public opinion is on some hot issue, or which laundry detergent gets clothes cleaner. If you choose a career in law, you might have to deal with studies of “community standards.” As a social worker, you might need to assess research comparing different treatment modalities. So, even if you decide not to produce social research yourself, you can profit from becoming an informed consumer of it.

The Modern View

What philosophers call the modern view accepts such diversity as legitimate, a philosophical “different strokes for different folks.” As a modern thinker you would say, “I regard the spirits in the tree as evil, but I know others regard them as good. Neither of us is right or wrong. There are simply spirits in the tree. They are neither good nor evil, but different people have different ideas about them.”

FIGURE 1-1 A Book. All of these are the same book, but it looks different when viewed from different locations, perspectives, or “points of view.”

It’s probably easy for you to adopt the modern view. Some might regard a dandelion as a beautiful flower whereas others see only an annoying weed. To the premoderns, a dandelion has to be either one or the other. If you think it is a weed, it is really a weed, even though some people have a warped sense of beauty. In the modern view, a dandelion is simply a dandelion. The concepts “beautiful flower” and “annoying weed” are subjective points of view imposed on the plant. Neither is a quality of the plant itself, just as “good” and “evil” were concepts imposed on the spirits in the tree.
The Postmodern View

Philosophers also speak of a postmodern view of reality. In this view, neither the spirits nor the dandelion exists. All that’s “real” are the images we get through our points of view. Put differently, there’s nothing out there—it’s all in here. As Gertrude Stein said of Oakland, “There’s no there, there.” No matter how bizarre the postmodern view may seem to you on first reflection, it has a certain ironic inevitability. Take a moment to notice the book you’re reading; notice specifically what it looks like. As you’re reading these words, it probably looks like Figure 1-1a. But does Figure 1-1a represent the way your book “really” looks? Or does it merely represent what the book looks like from your current point of view? Surely, Figures 1-1b, c, and d are equally valid representations. But these views of the book differ so much from one another. Which is the “reality”?

As this example illustrates, there is no answer to the question, “What does the book really look like?” All we can offer is the different ways it looks from different points of view. Thus, according to the postmodern view, there is no “book,” only various images of it from different points of view. And all the different images are equally “true.” Now let’s apply this logic to a social situation. Imagine a husband and wife arguing. When she looks over at her quarreling husband, Figure 1-2 is what the wife sees. Take a minute to imagine what you would think and feel if you were the woman in this drawing. How would you explain to your best friend what had happened? What solutions to the conflict would seem appropriate if you were this woman? What the woman’s husband sees is another matter altogether, as shown in Figure 1-3. Take a minute to imagine the situation from his point of view. What thoughts and feelings would you have? How would you tell your best friend about it? What solutions would seem appropriate? Now suppose you’re an outside observer watching this interaction.
What would it look like? Unfortunately, we can’t easily portray the third point of view without knowing something about the personal feelings, beliefs, past experiences, and so forth that you would bring to your task as “outside” observer. (Though I call you an outside observer, you are, of course, observing from inside your own mental system.) To take an extreme example, if you were a confirmed male chauvinist, you’d probably see the fight pretty much the same way the husband saw it. On the other hand, if you were committed to the view that men are unreasonable bums, you’d see things the way the wife saw them.

FIGURE 1-2 Wife’s Point of View. There is no question in the wife’s mind as to who is right and rational and who is out of control.

FIGURE 1-3 Husband’s Point of View. There is no question in the husband’s mind as to who is right and rational and who is out of control.

But imagine instead that you see two unreasonable people quarreling irrationally with each other. Would you see them both as irresponsible jerks, equally responsible for the conflict? Or would you see them as two people facing a difficult human situation, each doing the best he or she can to resolve it? Imagine feeling compassion for them and noticing how each of them attempts to end the hostility, even though the gravity of the problem keeps them fighting. Notice how different these several views are. Which is a “true” picture of what is happening between the wife and the husband? You win the prize if you notice that your own point of view would again color your perception of what is happening here. The postmodern view represents a critical dilemma for scientists. While their task is to observe and understand what is “really” happening, they are all human and, as such, have personal orientations that color what they observe and how they explain it.
There is ultimately no way people can totally step outside their humanness to see and understand the world as it “really” is. There are only our several subjective views. We’ll return to this discussion in Chapter 2 when we focus on specific scientific paradigms. Ultimately, what you’ll see is that (1) established scientific procedures sometimes allow you to deal effectively with this dilemma—that is, we can study people and help them through their difficulties without being able to view “reality” directly—and (2) the philosophical stances I’ve presented suggest a powerful range of possibilities for structuring research. Let’s turn now to the foundations of the social scientific approaches to understanding. From there we can examine the specific research techniques social scientists use.

THE FOUNDATIONS OF SOCIAL SCIENCE

The two pillars of science are logic and observation. A scientific understanding of the world must (1) make sense and (2) correspond with what we observe. Both elements are essential to science and relate to three major aspects of the overall scientific enterprise: theory, data collection, and data analysis. In the most general terms, scientific theory deals with logic, data collection with observation, and data analysis with patterns in what is observed and, where appropriate, the comparison of what is logically expected with what is actually observed. Though most of this textbook deals with data collection and data analysis—demonstrating how to conduct empirical research—recognize that social science involves all three elements. As such, Chapter 2 of this book concerns the theoretical context of research; Parts 2 and 3 focus on data collection; and Part 4 offers an introduction to the analysis of data. Figure 1-4 offers a schematic view of how the book addresses these three aspects of social science.
Let’s turn now to some of the fundamental issues that distinguish social science from other ways of looking at social phenomena.

Theory, Not Philosophy or Belief

Social scientific theory has to do with what is, not with what should be. For many centuries, however, social theory has combined these two orientations. Social philosophers liberally mixed their observations of what happened around them, their speculations about why, and their ideas about how things ought to be. Although modern social scientists may do the same from time to time, realize that social science has to do with how things are and why. This means that scientific theory—and science itself—cannot settle debates on value. Science cannot determine whether capitalism is better or worse than socialism except in terms of agreed-on criteria. To determine scientifically whether capitalism or socialism most supports human dignity and freedom, we would first have to agree on some measurable definitions of dignity and freedom. Our conclusions would depend totally on this agreement and would have no general meaning beyond it. By the same token, if we could agree that suicide rates, say, or giving to charity were good measures of a religion’s quality, then we could determine scientifically whether Buddhism or Christianity is the better religion. Again, our conclusion would be inextricably tied to the given criterion. As a practical matter, people seldom agree on criteria for determining issues of value, so science is seldom useful in settling such debates. In fact, questions like these are so much a matter of opinion and belief that scientific inquiry is often viewed as a threat to what is “already known.” We’ll consider this issue in more detail in Chapter 12, when we look at evaluation research. As you’ll see, social scientists have become increasingly involved in studying programs that reflect ideological points of view, such as affirmative action or welfare reform.
One of the biggest problems researchers face is getting people to agree on criteria of success and failure. Yet such criteria are essential if social scientific research is to tell us anything useful about matters of value. By analogy, a stopwatch can’t tell us if one sprinter is better than another unless we first agree that speed is the critical criterion. Social science, then, can help us know only what is and why. We can use it to determine what ought to be, but only when people agree on the criteria for deciding what’s better than something else—an agreement that seldom occurs. With that understood, let’s turn now to some of the fundamental bases on which social science allows us to develop theories about what is and why.

Social Regularities

In large part, social scientific theory aims to find patterns in social life. That aim, of course, applies to all science, but it sometimes presents a barrier to people when they first approach social science. Actually, the vast number of formal norms in society creates a considerable degree of regularity. For example, only people who have reached a certain age can vote in elections. In the U.S. military, until recently only men could participate in combat. Such formal prescriptions, then, regulate, or regularize, social behavior. Aside from formal prescriptions, we can observe other social norms that create more regularities. Republicans are more likely than Democrats to vote for Republican candidates. University professors tend to earn more money than do unskilled laborers. Men earn more than do women. The list of regularities could go on and on.

theory — A systematic explanation for the observations that relate to a particular aspect of life: juvenile delinquency, for example, or perhaps social stratification or political revolution.

FIGURE 1-4 Social Science = Theory + Data Collection + Data Analysis. Theory (Chapter 2) leads to data collection—planning to do research (Chapters 4–6), sampling (Chapter 7), observation (Chapters 8–12), and data processing (Chapters 13–14)—which leads to data analysis and application (Part 4).

The objection that there are always exceptions to any social regularity is also inappropriate. It doesn’t matter that a particular woman earns more money than a particular man if men earn more than women overall. The pattern still exists. Social regularities represent probabilistic patterns; a general pattern need not be reflected in 100 percent of the observable cases. This rule applies in physical science as well as social science. In genetics, for example, the mating of a blue-eyed person with a brown-eyed person will probably result in a brown-eyed child. The birth of a blue-eyed child does not challenge the observed regularity, however, because the geneticist states only that the brown-eyed offspring is more likely and, further, that brown-eyed offspring will be born in a certain percentage of the cases. The social scientist makes a similar, probabilistic prediction—that women overall are likely to earn less than men. And the social scientist asks why this is the case.

Aggregates, Not Individuals

Social regularities do exist, then, and are worthy of theoretical and empirical study. As such, social scientists study primarily social patterns rather than individual ones. These patterns reflect the aggregate or collective actions and situations of many individuals.
Although social scientists often study motivations and actions that affect individuals, they seldom study the individual per se. That is, they create theories about the nature of group, rather than individual, life. Sometimes the collective regularities are amazing. Consider the birthrate, for example. People have babies for an incredibly wide range of personal reasons. Some do it because their parents want them to. Some think of it as a way of completing their womanhood or manhood. Others want to hold their marriages together. Still others have babies by accident. If you have had a baby, you could probably tell a much more detailed, idiosyncratic story. Why did you have the baby when you did, rather than a year earlier or later? Maybe your house burned down and you had to delay a year before you could afford to have the baby. Maybe you felt that being a family person would demonstrate maturity, which would support a promotion at work. Everyone who had a baby last year had a different set of reasons for doing so. Yet, despite this vast diversity, despite the idiosyncrasy of each individual’s reasons, the overall birthrate in a society (the number of live births per 1,000 population) is remarkably consistent from year to year. See Table 1-1 for recent birthrates for the United States.

TABLE 1-1 Birthrates,* United States: 1980–2003

1980  15.9      1992  15.8
1981  15.8      1993  15.4
1982  15.9      1994  15.0
1983  15.6      1995  14.6
1984  15.6      1996  14.4
1985  15.8      1997  14.2
1986  15.6      1998  14.3
1987  15.7      1999  14.2
1988  16.0      2000  14.4
1989  16.4      2001  14.1
1990  16.7      2002  13.9
1991  16.2      2003  14.1

*Live births per 1,000 population
Source: U.S. Bureau of the Census, Statistical Abstract of the United States (Washington, DC: U.S. Government Printing Office, 2006), Table 72, p. 64.

If the U.S. birthrate were 15.9, 35.6, 7.8, 28.9, and 16.2 in five successive years, demographers would begin dropping like flies.
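The consistency of the series is easy to quantify from the figures in Table 1-1. The short calculation below is a sketch using only the table’s own numbers:

```python
# Birthrates (live births per 1,000 population) from Table 1-1, 1980-2003.
rates = [15.9, 15.8, 15.9, 15.6, 15.6, 15.8, 15.6, 15.7, 16.0, 16.4,
         16.7, 16.2, 15.8, 15.4, 15.0, 14.6, 14.4, 14.2, 14.3, 14.2,
         14.4, 14.1, 13.9, 14.1]

mean = sum(rates) / len(rates)
variance = sum((r - mean) ** 2 for r in rates) / len(rates)
std = variance ** 0.5

# The year-to-year variation is small relative to the level of the rate,
# nothing like the erratic 15.9, 35.6, 7.8, ... series imagined above.
print(f"mean = {mean:.2f}, standard deviation = {std:.2f}")
print(f"highest year = {max(rates)}, lowest year = {min(rates)}")
```

Over 24 years, the rate never strays more than a point and a half or so from its average, which is exactly the kind of aggregate regularity the text describes.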
As you can see, however, social life is far more orderly than that. Moreover, this regularity occurs without societywide regulation. As mentioned earlier, no one plans how many babies will be born or determines who will have them. (See the box “Birthrate Implications” for a look at how the analysis of birthrates can serve many purposes.) Social scientific theories try to explain why aggregated patterns of behavior are so regular even when the individuals participating in them may change over time. We could say that social scientists don’t seek to explain people per se. They try instead to understand the systems in which people operate, which in turn explain why people do what they do. The elements in such a system are not people but variables.

IN THE REAL WORLD: BIRTHRATE IMPLICATIONS

Take a minute to reflect on the practical implications of the data you’ve just seen. The “What Do You Think?” box for this chapter asked how baby food and diaper manufacturers could plan production from year to year. The consistency of U.S. birthrates suggests this is not the problem it might have seemed. Who else might benefit from this kind of analysis? What about health care workers and educators? Can you think of anyone else? What if we organized birthrates by region of the country, by ethnicity, by income level, and so forth? Clearly, these additional analyses could make the data even more useful. As you learn about the options available to social researchers, I think you’ll gain an appreciation for the practical value that research can have for the whole society.

A Variable Language

Our most natural attempts at understanding are usually concrete and idiosyncratic. That’s just the way we think. Imagine that someone says to you, “Women ought to get back into the kitchen where they belong.” You’re likely to hear that comment in terms of what you know about the speaker. If it’s your old uncle Harry who is also strongly opposed to daylight saving time, zip codes, and personal computers, you’re likely to think his latest pronouncement simply fits into his rather dated point of view about things in general. If, on the other hand, the statement issues forth from a politician who is trailing a female challenger and who has also begun making statements about women being emotionally unfit for public office and not understanding politics, you may hear his latest comment in the context of this political challenge. In both examples, you’re trying to understand the thoughts of a particular individual. In social science, researchers go beyond that level of understanding to seek insights into classes or types of individuals. Regarding the two examples just described, they might use terms such as old-fashioned or bigot to describe the kind of person who made the comment. In other words, they try to place the individual in a set of similar individuals, according to a particular, defined concept. By examining an individual in this way, social scientists can make sense out of more than one person. In understanding what makes the bigoted politician think the way he does, they’ll also learn about other people who are “like him.” In other words, they have not been studying bigots as much as bigotry.

Bigotry here is spoken of as a variable because it varies. Some people are more bigoted than others. Social scientists are interested in understanding the system of variables that causes bigotry to be high in one instance and low in another. The idea of a system composed of variables may seem rather strange, so let’s look at an analogy. The subject of a physician’s attention is the patient. If the patient is ill, the physician’s purpose is to help that patient get well. By contrast, a medical researcher’s subject matter is different: the variables that cause a disease, for example. The medical researcher may study the physician’s patient, but only as a carrier of the disease.
Of course, medical researchers care about real people, but in the actual research, patients are directly relevant only for what they reveal about the disease under study. In fact, when researchers can study a disease meaningfully without involving actual patients, they do so. Social research involves the study of variables and the attributes that compose them. Social scientific theories are written in a language of variables, and people get involved only as “carriers” of those variables. Here’s a closer look at what social scientists mean by variables and attributes. Attributes or values are characteristics or qualities that describe an object—in this case, a person. Examples include female, Asian, alienated, conservative, dishonest, intelligent, and farmer. Anything you might say to describe yourself or someone else involves an attribute. Variables, on the other hand, are logical groupings of attributes. Thus, for example, male and female are attributes, and sex is the variable composed of these two attributes. The variable occupation is composed of attributes such as farmer, professor, and truck driver. Social class is a variable composed of a set of attributes such as upper class, middle class, and lower class. Sometimes it helps to think of attributes as the categories that make up a variable. See Figure 1-5 for a schematic review of what social scientists mean by variables and attributes. The relationship between attributes and variables lies at the heart of both description and explanation in science. For example, we might describe a college class in terms of the variable sex by reporting the observed frequencies of the attributes male and female: “The class is 60 percent men and 40 percent women.” An unemployment rate can be thought of as a description of the variable employment status of a labor force in terms of the attributes employed and unemployed. 
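The relationship between variables and attributes can be sketched as a simple data structure. The sketch below is purely illustrative (the function name `describe` and the sample person are invented), using the variables and attributes named above:

```python
# Variables are logical groupings of attributes (the categories that
# make up a variable).
variables = {
    "sex": ["male", "female"],
    "occupation": ["farmer", "professor", "truck driver"],
    "social class": ["upper class", "middle class", "lower class"],
}

def describe(person, variable):
    """Report which attribute of a given variable describes a person.

    The person matters here only as a "carrier" of the variable's attributes.
    """
    attribute = person[variable]
    if attribute not in variables[variable]:
        raise ValueError(f"{attribute!r} is not an attribute of {variable!r}")
    return f"{variable}: {attribute}"

person = {"sex": "female", "occupation": "professor", "social class": "middle class"}
print(describe(person, "occupation"))
```

Describing a whole group, as in “the class is 60 percent men and 40 percent women,” amounts to tallying how often each attribute of a variable occurs across many such carriers.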
FIGURE 1-5 Variables and Attributes. Some common social concepts (female, young, upper class, African American, plumber) are sorted under the variables they belong to (sex, age, social class, race/ethnicity, occupation); for example, the variable age comprises attributes such as young, middle-aged, and old, and the variable occupation comprises attributes such as plumber and lawyer.

FIGURE 5-1 Levels of Measurement. Often you can choose among different levels of measurement—nominal, ordinal, interval, or ratio—carrying progressively more amounts of information. An ordinal measure might run from “not very important” through “fairly important” and “very important” to “most important thing in my life”; IQ scores (95, 100, 105, 110, 115) illustrate an interval measure; income ($0 to $50,000) illustrates a ratio measure.

The implications of levels of measurement will become clearest in the analysis of data (discussed in Part 4), but you need to anticipate such implications when you’re structuring any research project. Certain quantitative analysis techniques require variables that meet certain minimum levels of measurement. To the extent that the variables to be examined in a research project are limited to a particular level of measurement—say, ordinal—you should plan your analytical techniques accordingly. More precisely, you should anticipate drawing research conclusions appropriate to the levels of measurement used in your variables. For example, you might reasonably plan to determine and report the mean age of a population under study (add up all the individual ages and divide by the number of people), but you should not plan to report the mean religious affiliation, because that is a nominal variable, and the mean requires ratio-level data. (You could report the modal—the most common—religious affiliation.)

At the same time, you can treat some variables as representing different levels of measurement. Ratio measures are the highest level, descending through interval and ordinal to nominal, the lowest level of measurement. A variable representing a higher level of measurement—say, ratio—can also be treated as representing a lower level of measurement—say, ordinal.
Recall, for example, that age is a ratio measure. If you wished to examine only the relationship between age and some ordinal-level variable—say, self-perceived religiosity: high, medium, and low—you might choose to treat age as an ordinal-level variable as well. You might characterize the subjects of your study as being young, middle-aged, and old, specifying what age range determined each of these groupings. Finally, age might be used as a nominal-level variable for certain research purposes. People might be grouped as being born during the depression of the 1930s or not. Another nominal measurement, based on birth date rather than just age, would be the grouping of people by astrological signs.

The level of measurement you’ll seek, then, is determined by the analytical uses you’ve planned for a given variable, as you keep in mind that some variables are inherently limited to a certain level. If a variable is to be used in a variety of ways, requiring different levels of measurement, the study should be designed to achieve the highest level required. For example, if the subjects in a study are asked their exact ages, they can later be organized into ordinal or nominal groupings. You don’t necessarily need to measure variables at their highest level of measurement, however. If you’re sure to have no need for ages of people at higher than the ordinal level of measurement, you may simply ask people to indicate their age range, such as 20 to 29, 30 to 39, and so forth. In a study of the wealth of corporations, rather than seek more precise information, you may use Dun & Bradstreet ratings to rank corporations. Whenever your research purposes are not altogether clear, however, seek the highest level of measurement possible. Again, although ratio measures can later be reduced to ordinal ones, you cannot convert an ordinal measure to a ratio one. More generally, you cannot convert a lower-level measure to a higher-level one.
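This one-way street can be sketched in a few lines of code. The age cutoffs and the reference year below are hypothetical choices made for the example, not values from the text:

```python
def to_ordinal(age):
    """Ratio -> ordinal: collapse an exact age into an ordered range."""
    if age < 35:
        return "young"
    elif age < 65:
        return "middle-aged"
    return "old"

def to_nominal(age, current_year=2007):
    """Ratio -> nominal: born during the depression of the 1930s, or not."""
    birth_year = current_year - age
    return "depression-born" if 1930 <= birth_year <= 1939 else "other"

# Exact (ratio-level) ages can always be reduced to the lower levels ...
ages = [22, 40, 71, 74]
print([to_ordinal(a) for a in ages])
print([to_nominal(a) for a in ages])
# ... but not the reverse: from "middle-aged" alone there is no way to
# recover an exact age, which is why an ordinal measure cannot be
# converted back into a ratio one.
```

Note that the reduction throws information away at each step: the ordinal groupings keep the ordering of ages but not their magnitudes, and the nominal grouping keeps neither.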
That is a one-way street worth remembering. Typically a research project will tap variables at different levels of measurement. For example, William and Denise Bielby (1999) set out to examine the world of film and television, using a nomothetic, longitudinal approach (take a moment to remind yourself what that means). In what they referred to as the “culture industry,” the authors found that reputation (an ordinal variable) is the best predictor of screenwriters’ future productivity. More interestingly, they found that screenwriters who were represented by “core” (or elite) agencies were far more likely not only to find jobs (a nominal variable) but also to find jobs that paid more (a ratio variable). In other words, the researchers found that an agency’s reputation (ordinal) was a key independent variable for predicting a screenwriter’s success. The researchers also found that being older (ratio), being female (nominal), belonging to an ethnic minority (nominal), and having more years of experience (ratio) were disadvantageous for a screenwriter. On the other hand, higher earnings from previous years (measured in ordinal categories) led to more success in the future. In the researchers’ terms, “success breeds success” (Bielby and Bielby 1999:80). See the box “On to Hollywood” for more on the Bielby study.

IN THE REAL WORLD: ON TO HOLLYWOOD

Say you want to be a Hollywood screenwriter. How might you use the results of the Bielby and Bielby (1999) study to enhance your career? Say you didn’t do so well and instead started a school for screenwriters. How could the results of the study be used to plan courses?
Finally, how might the results be useful to you if you were a social activist committed to fighting discrimination in the "culture industry"?

CHAPTER 5 CONCEPTUALIZATION, OPERATIONALIZATION, AND MEASUREMENT

Single or Multiple Indicators

With so many alternatives for operationalizing social research variables, you may find yourself worrying about making the right choices. To counter this feeling, let me add a dash of certainty and stability. Many social research variables have fairly obvious, straightforward measures. No matter how you cut it, sex usually turns out to be a matter of male or female: a nominal-level variable that can be measured by a single observation—either through looking (well, not always) or through asking a question (usually). In a study involving the size of families, you'll want to think about adopted and foster children as well as blended families, but it's usually pretty easy to find out how many children a family has. For most research purposes, the resident population of a country is the resident population of that country—you can find the number in an almanac. A great many variables, then, have obvious single indicators. If you can get one piece of information, you have what you need.

Sometimes, however, there is no single indicator that will give you the measure of a chosen variable. As discussed earlier in this chapter, many concepts are subject to varying interpretations—each with several possible indicators. In these cases, you'll want to make several observations for a given variable. You can then combine the several pieces of information you've collected to create a composite measurement of the variable in question. Chapter 6 is devoted to ways of doing that, so here let's look at just one simple illustration. Consider the concept "college performance." All of us have noticed that some students perform well in college courses and others do not.
In studying these differences, we might ask what characteristics and experiences are related to high levels of performance (many researchers have done just that). How should we measure overall performance? Each grade in any single course is a potential indicator of college performance, but it also may not typify the student's general performance. The solution to this problem is so firmly established that it is, of course, obvious: the grade point average (GPA). We assign numerical scores to each letter grade, total the points earned by a given student, and divide by the number of courses taken to obtain a composite measure. (If the courses vary in number of credits, we adjust the point values accordingly.) It's often appropriate to create such composite measures in social research.

Some Illustrations of Operationalization Choices

To bring together all the operationalization choices available to the social researcher and to show the potential in those possibilities, let's look at some of the distinct ways you might address various research problems. The alternative ways of operationalizing the variables in each case should demonstrate the opportunities that social research can present to our ingenuity and imaginations. To simplify matters, I have not attempted to describe all the research conditions that would make one alternative superior to the others, though in a given situation they would not all be equally appropriate.

1. Are women more compassionate than men?

a. Select a group of subjects for study, with equal numbers of men and women. Present them with hypothetical situations that involve someone's being in trouble. Ask them what they would do if they were confronted with that situation. What would they do, for example, if they came across a small child who was lost and crying for his or her parents? Consider any answer that involves helping or comforting the child as an indicator of compassion.
See whether men or women are more likely to indicate they would be compassionate.

b. Set up an experiment in which you pay a small child to pretend that he or she is lost. Put the child to work on a busy sidewalk and observe whether men or women are more likely to offer assistance. Also be sure to count the total number of men and women who walk by, because there may be more of one than the other. If that's the case, simply calculate the percentage of men and the percentage of women who help.

c. Select a sample of people and do a survey in which you ask them what organizations they belong to. Calculate whether women or men are more likely to belong to those that seem to reflect compassionate feelings. To take account of men who belong to more organizations than do women in general—or vice versa—do this: For each person you study, calculate the percentage of his or her organizational memberships that reflect compassion. See if men or women have a higher average percentage.

2. Are sociology students or accounting students better informed about world affairs?

a. Prepare a short quiz on world affairs and arrange to administer it to the students in a sociology class and in an accounting class at a comparable level. If you want to compare sociology and accounting majors, be sure to ask students what they are majoring in.

b. Get the instructor of a course in world affairs to give you the average grades of sociology and accounting students in the course.

c. Take a petition to sociology and accounting classes that urges that "the United Nations headquarters be moved to New York City." Keep a count of how many in each class sign the petition and how many inform you that the UN headquarters is already located in New York City.

3. Do people consider New York or California the better place to live?

a. Consulting the Statistical Abstract of the United States or a similar publication, check the migration rates into and out of each state.
See if you can find the numbers moving directly from New York to California and vice versa.

b. The national polling companies—Gallup, Harris, Roper, and so forth—often ask people what they consider the best state to live in. Look up some recent results in the library or through your local newspaper.

c. Compare suicide rates in the two states.

4. Who are the most popular instructors on your campus—those in the social sciences, the natural sciences, or the humanities?

a. If your school has formal student evaluations of instructors, review some recent results and compute the average ratings of each group.

b. Begin visiting the introductory courses given in each group of disciplines and measure the attendance rate of each class.

c. In December, select a group of faculty in each of the three divisions and ask them to keep a record of the numbers of holiday greeting cards and presents they receive from admiring students. See who wins.

The point of these examples is not necessarily to suggest respectable research projects but to illustrate the many ways variables can be operationalized. The box "Measuring College Satisfaction" briefly overviews the preceding steps in terms of a concept mentioned at the outset of this chapter.

Operationalization Goes On and On

Although I've discussed conceptualization and operationalization as activities that precede data collection and analysis—for example, you must design questionnaire items before you send out a questionnaire—these two processes continue throughout any research project, even if the data have been collected in a structured mass survey. As we've seen, in less-structured methods such as field research, the identification and specification of relevant concepts is inseparable from the ongoing process of observation. As a researcher, always be open to reexamining your concepts and definitions. The ultimate purpose of social research is to clarify the nature of social life.
The validity and utility of what you learn in this regard doesn't depend on when you first figured out how to look at things any more than it matters whether you got the idea from a learned textbook, a dream, or your brother-in-law.

IN THE REAL WORLD
MEASURING COLLEGE SATISFACTION

Early in this chapter, we considered "college satisfaction" as an example of a concept we may often talk about casually. To study such a concept, however, we need to engage in the processes of conceptualization and operationalization. I'll sketch out the process briefly and you might try your hand at expanding on my comments. What are some of the dimensions of college satisfaction? Here are a few to get you started:

Academic quality: faculty, courses, majors
Physical facilities: classrooms, dorms, cafeteria, grounds
Athletics and extracurricular activities
Costs and availability of financial aid
Sociability of students, faculty, staff
Security, crime on campus

What are some more dimensions that might be relevant to students' satisfaction or dissatisfaction with their school? How would you measure each of these dimensions? One method would be to ask a sample of students, "How would you rate your level of satisfaction with each of the following?" giving them a list of items similar to those listed here and providing a set of categories for them to use (such as very satisfied, satisfied, dissatisfied, very dissatisfied). But suppose you didn't have the time or money to conduct a survey and were interested in comparing overall levels of satisfaction at several schools. What data about schools (the unit of analysis) might give you the answer you were interested in? Retention rates might be one general indicator of satisfaction. Can you think of others?

CRITERIA OF MEASUREMENT QUALITY

This chapter has come some distance. It began with the bald assertion that social scientists can measure anything that exists.
Then we discovered that most of the things we might want to measure and study don't really exist. Next we learned that it's possible to measure them anyway. Now we conclude the chapter with a discussion of some of the yardsticks against which we judge our relative success or failure in measuring things—even things that don't exist.

Precision and Accuracy

To begin, measurements can be made with varying degrees of precision. As we saw in the discussion of operationalization, precision concerns the fineness of distinctions made between the attributes that compose a variable. The description of a woman as "43 years old" is more precise than "in her forties." Saying a street-corner gang was formed in the summer of 1996 is more precise than saying "during the 1990s." As a general rule, precise measurements are superior to imprecise ones, as common sense suggests. There are no conditions under which imprecise measurements are intrinsically superior to precise ones. Even so, exact precision is not always necessary or desirable. If knowing that a woman is in her forties satisfies your research requirements, then any additional effort invested in learning her precise age is wasted. The operationalization of concepts, then, must be guided partly by an understanding of the degree of precision required. If your needs are not clear, be more precise rather than less.

Don't confuse precision with accuracy, however. Describing someone as "born in New England" is less precise than "born in Stowe, Vermont"—but suppose the person in question was actually born in Boston. The less-precise description, in this instance, is more accurate, a better reflection of the real world. Precision and accuracy are obviously important qualities in research measurement, and they probably need no further explanation. When social researchers construct and evaluate measurements, however, they pay special attention to two technical considerations: reliability and validity.
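The birthplace example can be turned into a short sketch. The small region lookup table is invented for illustration; only the Boston, Stowe, and New England cases come from the text:

```python
def is_accurate(description, true_birthplace):
    """A description is accurate if the true birthplace falls within it."""
    # Hypothetical lookup table mapping an imprecise regional label
    # to the places it covers (illustrative, not exhaustive).
    regions = {
        "New England": {"Boston", "Stowe, Vermont", "Portland, Maine"},
    }
    if description in regions:              # imprecise, region-level description
        return true_birthplace in regions[description]
    return description == true_birthplace  # precise, city-level description

# Someone actually born in Boston:
print(is_accurate("Stowe, Vermont", "Boston"))  # False: precise but inaccurate
print(is_accurate("New England", "Boston"))     # True: imprecise but accurate
```

The point of the sketch is only that precision and accuracy vary independently: the city-level description is the more precise one, yet here it is the regional description that reflects the real world.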
Reliability

In the abstract, reliability is a matter of whether a particular technique, applied repeatedly to the same object, yields the same result each time. Let's say you want to know how much I weigh. (No, I don't know why.) As one technique, say you ask two different people to estimate my weight. If the first person estimates 150 pounds and the other estimates 300, we have to conclude that the technique of having people estimate my weight isn't very reliable. Suppose, as an alternative, that you use a bathroom scale as your measurement technique. I step on the scale twice, and you note the same result each time. The scale has presumably reported the same weight both times, indicating that the scale provides a more reliable technique for measuring a person's weight than does asking people to estimate it.

Reliability, however, does not ensure accuracy any more than does precision. Suppose I've set my bathroom scale to shave five pounds off my weight just to make me feel better. Although you would (reliably) report the same weight for me each time, you would always be wrong. This new element, called bias, is discussed in Chapter 7. For now, just be warned that reliability does not ensure accuracy.

Let's suppose we're interested in studying morale among factory workers in two different kinds of factories. In one set of factories, workers have specialized jobs, reflecting an extreme division of labor. Each worker contributes a tiny part to the overall process performed on a long assembly line. In the other set of factories, each worker performs many tasks, and small teams of workers complete the whole process. How should we measure morale? Following one strategy, we could observe the workers in each factory, noticing such things as whether they joke with one another, whether they smile and laugh a lot, and so forth.
We could ask them how they like their work and even ask them whether they think they would prefer their current arrangement or the other one being studied. By comparing what we observed in the different factories, we might reach a conclusion about which assembly process produces the higher morale. Notice that I've just described a qualitative measurement procedure. Now let's look at some reliability problems inherent in this method. First, how you and I are feeling when we do the observing will likely color what we see. We may misinterpret what we observe. We may see workers kidding each other but think they're having an argument. We may catch them on an off day. If we were to observe the same group of workers several days in a row, we might arrive at different evaluations on each day. If several observers evaluated the same behavior, on the other hand, they similarly might arrive at different conclusions about the workers' morale.

Here's another, quantitative approach to assessing morale. Suppose we check the company records to see how many grievances have been filed with the union during some fixed period. Presumably this would be an indicator of morale: the more grievances, the lower the morale. This measurement strategy would appear to be more reliable: Counting up the grievances over and over, we should keep arriving at the same number.

reliability: That quality of measurement method that suggests that the same data would have been collected each time in repeated observations of the same phenomenon. In the context of a survey, we would expect that the question "Did you attend religious services last week?" would have higher reliability than the question "About how many times have you attended religious services in your life?" This is not to be confused with validity.
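Returning to the bathroom-scale example above, the difference between reliability and accuracy can be sketched numerically. The true weight of 160 pounds is an invented value; the five-pound bias is the chapter's own hypothetical:

```python
import statistics

def rigged_scale(true_weight):
    """A bathroom scale set to shave five pounds off the reading:
    perfectly consistent, consistently wrong."""
    return true_weight - 5

true_weight = 160
readings = [rigged_scale(true_weight) for _ in range(4)]

print(readings)                          # [155, 155, 155, 155]
print(statistics.pstdev(readings) == 0)  # True: zero variation, fully reliable
print(readings[0] == true_weight)        # False: reliable, but not accurate
```

Counting union grievances behaves the same way: repeated counts agree with one another (reliability), which says nothing about whether grievances actually indicate morale (validity).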
If you're thinking that the number of grievances doesn't necessarily measure morale, you're worrying about validity, not reliability. We'll discuss validity in a moment. The point for now is that the last method is more like my bathroom scale—it gives consistent results. In social research, reliability problems crop up in many forms. Reliability is a concern every time a single observer is the source of data, because we have no certain guard against the impact of that observer's subjectivity. We can't tell for sure how much of what's reported originated in the situation observed and how much came from the observer. Subjectivity is a problem not only with single observers, however. Survey researchers have known for a long time that different interviewers, because of their own attitudes and demeanors, get different answers from respondents. Or, if we were to conduct a study of newspapers' editorial positions on some public issue, we might create a team of coders to take on the job of reading hundreds of editorials and classifying them in terms of their position on the issue. Unfortunately, different coders will code the same editorial differently. Or we might want to classify a few hundred specific occupations in terms of some standard coding scheme, say a set of categories created by the Department of Labor or by the Census Bureau. You and I would not place all those occupations in the same categories. Each of these examples illustrates problems of reliability. Similar problems arise whenever we ask people to give us information about themselves. Sometimes we ask questions that people don't know the answers to: How many times, if any, have you been to religious services this year? Sometimes we ask people about things they consider totally irrelevant: Are you satisfied with China's current relationship with Albania?
In such cases, people will answer differently at different times because they're making up answers as they go. Sometimes we explore issues so complicated that a person who had a clear opinion in the matter might arrive at a different interpretation of the question when asked a second time. So how do you create reliable measures? If your research design calls for asking people for information, you can be careful to ask only about things the respondents are likely to know the answer to. Ask about things relevant to them, and be clear in what you're asking. Of course, these techniques don't solve every possible reliability problem. Fortunately, social researchers have developed several techniques for cross-checking the reliability of the measures they devise.

Test-Retest Method

Sometimes it's appropriate to make the same measurement more than once, a technique called the test-retest method. If you don't expect the information being sought to change, then you should expect the same response both times. If answers vary, the measurement method may, to the extent of that variation, be unreliable. Here's an illustration. In their research on Health Hazard Appraisal (HHA), a part of preventive medicine, Jeffrey Sacks, W. Mark Krushat, and Jeffrey Newman (1980) wanted to determine the risks associated with various background and lifestyle factors, making it possible for physicians to counsel their patients appropriately. By knowing patients' life situations, physicians could advise them on their potential for survival and on how to improve it. This purpose, of course, depended heavily on the accuracy of the information gathered about each subject in the study. To test the reliability of their information, Sacks and his colleagues had all 207 subjects complete a baseline questionnaire that asked about their characteristics and behavior.
Three months later, a follow-up questionnaire asked the same subjects for the same information, and the results of the two surveys were compared. Overall, only 15 percent of the subjects reported the same information in both studies. Sacks and his colleagues reported the following research findings:

Almost 10 percent of subjects reported a different height at follow-up examination. Parental age was changed by over one in three subjects. One parent reportedly aged 20 chronologic years in three months. One in five ex-smokers and ex-drinkers have apparent difficulty in reliably recalling their previous consumption pattern. — (1980:730)

Some subjects had erased all traces of previously reported heart murmurs, diabetes, emphysema, arrest records, and thoughts of suicide. One subject's mother, deceased in the first questionnaire, was apparently alive and well in time for the second. One subject had one ovary missing in the first study but present in the second. In another case, an ovary present in the first study was missing in the second study—and had been for ten years! One subject was reportedly 55 years old in the first study and 50 years old three months later. (You have to wonder whether the physician-counselors could ever have the impact on their patients that their patients' memories had.) Thus, test-retest revealed that this

[Figure 13-1, "Matching Signs and Their Meanings," presents a matching exercise pairing signs such as "Break a leg" with candidate meanings: good luck, first prize, Christmas, acting, smile for a picture.]

some "emoticons" like : ) —another example of semiotics.) While there is no doubt a story behind each of the linkages in Figure 13-1, the meanings you and I "know" today are socially constructed. Semiotic analysis involves a search for the meanings intentionally or unintentionally attached to signs. Consider the sign shown in Figure 13-2, from a hotel lobby in Portland, Oregon. What's being communicated by the rather ambiguous sign?
CHAPTER 13 QUALITATIVE DATA ANALYSIS, Earl Babbie

[Figure 13-2: Mixed Signals?]

The first sentence seems to be saying that the hotel is up to date with the current move away from tobacco in the United States. Guests who want a smoke-free environment need look no farther: This is a healthy place to stay. At the same time, says the second sentence, the hotel would not like to be seen as inhospitable to smokers. There's room for everyone under this roof. No one need feel excluded. This sign is more easily understood within a marketing paradigm than one of logic.

The "signs" examined in semiotics, of course, are not limited to this kind of sign. Most are quite different, in fact. Signs are any things that are assigned special meanings. They can include such things as logos, animals, people, and consumer products. Sometimes the symbolism is subtle.

semiotics: The study of signs and the meanings associated with them. This is commonly associated with content analysis.

You can find a classic analysis in Erving Goffman's Gender Advertisements (1979). Goffman focused on advertising pictures found in magazines and newspapers. The overt purpose of the ads, of course, was to sell specific products. But what else was communicated? What in particular did the ads say about men and women? Analyzing pictures containing both men and women, Goffman was struck by the fact that men were almost always bigger and taller than the women accompanying them. (In many cases, in fact, the picture managed to convey the distinct impression that the women were merely accompanying the men.) Although the most obvious explanation is that men are, on average, heavier and taller than women, Goffman suggested the pattern had a different meaning: that size and placement implied status. Those larger and taller presumably had higher social standing—more power and authority (1979:28). Goffman suggested that the ads communicated that men were more important than women.
In the spirit of Freud's comment that "sometimes a cigar is just a cigar" (he was a smoker), how would you decide whether the ads simply reflected the biological differences in the average sizes of men and women or whether they sent a message about social status? In part, Goffman's conclusion was based on an analysis of the exceptional cases: those in which the women appeared taller than the men. In these cases, the men were typically of a lower social status—the chef beside the society matron, for example. This confirmed Goffman's main point that size and height indicated social status. The same conclusion could be drawn from pictures with men of different heights. Those of higher status were taller, whether it was the gentleman speaking to a waiter or the boss guiding the work of his younger assistants. Where actual height was unclear, Goffman noted the placement of heads in the picture. The assistants were crouching down while the boss leaned over them. The servant's head was bowed so it was lower than that of the master. The latent message conveyed by the ads, then, was that the higher a person's head appeared in the ad, the more important that person was. And in the great majority of ads containing men and women, the former were clearly portrayed as more important. The subliminal message in the ads, whether intended or not, was that men are more powerful and enjoy a higher status than do women. Goffman examined several differences besides physical size in the portrayal of men and women. As another example, men were typically presented in active roles, women in passive ones. The (male) doctor examined the child while the (female) nurse or mother looked on, often admiringly. A man guided a woman's tennis stroke (all the while keeping his head higher than hers). A man gripped the reins of his galloping horse, while a woman rode behind him with her arms wrapped around his waist. A woman held the football, while a man kicked it.
A man took a photo, which contained only women. Goffman suggested that such pictorial patterns subtly perpetuated a host of gender stereotypes. Even as people spoke publicly about gender equality, these advertising photos established a quiet backdrop of men and women in their "proper roles."

Conversation Analysis

Ethnomethodology, as you'll recall, aims to uncover the implicit assumptions and structures in social life. Conversation analysis (CA) seeks to pursue that aim through an extremely close scrutiny of the way we converse with one another. In the examination of ethnomethodology in Chapter 10, you saw some examples of conversation analysis. Here we'll look a little more deeply into that technique. David Silverman (1993:125f), reviewing the work of other CA theorists and researchers, speaks of three fundamental assumptions. First, conversation is a socially structured activity. Like other social structures, it includes established rules of behavior. For example, we're expected to take turns, with only one person speaking at a time. In telephone conversations, the person answering the call is expected to speak first (as in "Hello"). You can verify the existence of this rule, incidentally, by picking up the phone without speaking. You may recall that this is the sort of thing ethnomethodologists tend to do. Second, Silverman points out that conversations must be understood contextually. The same utterance will have totally different meanings in different contexts. For example, notice how the meaning of "Same to you!" varies if preceded by "I don't like your looks" or by "Have a nice day." Third, CA aims to understand the structure and meaning of conversation through excruciatingly accurate transcripts of conversations. Not only are the exact words recorded, but all the uhs, ers, bad grammar, and pauses are also noted. Pauses, in fact, are measured to the nearest tenth of a second. The practical uses of this type of analysis are many.
Ann Marie Kinnell and Douglas Maynard (1996), for example, analyzed conversations between staff and clients at an HIV testing clinic to examine how information about safe sex was communicated. Among other things, they found that the staff tended to provide standard information rather than try to speak directly to a client's specific circumstances. Moreover, they seemed reluctant to give direct advice about safe sex, settling for information alone.

These discussions should give you some sense of the variety of qualitative analysis methods available to researchers. Now let's look at some of the data-processing and data-analysis techniques commonly used in qualitative research.

QUALITATIVE DATA PROCESSING

Let me begin this section with a warning. The activity we are about to examine is as much art as science. At the very least, there are no cut-and-dried steps that guarantee success. It's a lot like learning how to paint with watercolors or compose a symphony. You can certainly gain education in such activities; you can even take university courses in both. Each has its own conventions and techniques as well as tips you may find useful as you set out to create art or music. However, instruction can carry you only so far. The final product must come from you. Much the same can be said of qualitative data processing. This section presents some ideas on coding qualitative data, writing memos, and mapping concepts graphically. Although far from a "how-to" manual, these ideas give a useful starting point for finding order in qualitative data.

conversation analysis (CA): A meticulous analysis of the details of conversation, based on a complete transcript that includes pauses, hems, and haws.

Coding

Whether you engage in participant observation, in-depth interviewing, collecting biographical narratives, doing content analysis, or some other form of qualitative research, you will eventually possess a growing mass of data—most typically in the form of textual materials. What do you do next? The key process in the analysis of qualitative social research data is coding—classifying or categorizing individual pieces of data—coupled with some kind of retrieval system. Together, these procedures allow you to retrieve materials you may later be interested in.

Let's say you're chronicling the growth of a social movement. You recall writing up some notes about the details of the movement's earliest beginnings. Now you need that information. If all your notes have been catalogued by topic, retrieving those you need should be straightforward. As a simple format for coding and retrieval, you might have created a set of file folders labeled with various topics, such as "History." Data retrieval in this case means pulling out the "History" folder and rifling through the notes contained therein until you find what you need. As you'll see later in this chapter, several sophisticated computer programs allow for a faster, more certain, and more precise retrieval process. Rather than looking through a "History" file, you can go directly to notes dealing with the "Earliest History" or the "Founding" of the movement.

Coding has another, even more important purpose. As discussed earlier, the aim of data analysis is the discovery of patterns among the data, patterns that point to a theoretical understanding of social life. The coding and relating of concepts is key to this process and requires a more refined system than a set of manila folders. In this section, we'll assume that you'll be doing your coding manually. The next-to-last section of the chapter will illustrate the use of computer programs for qualitative data analysis.

Coding Units

As you may recall from the earlier discussion of content analysis, for statistical analysis it's important to identify a standardized unit of analysis prior to coding.
If you were comparing American and French novels, for example, you might evaluate and code sentences, paragraphs, chapters, or whole books. It would be important, however, to code the same units for each novel analyzed. This uniformity is necessary in a quantitative analysis, as it allows us to report something like "Twenty-three percent of the paragraphs contained metaphors." This is only possible if we've coded the same unit—paragraphs—in each of the novels.

Coding data for a qualitative analysis, however, is quite different. The concept is the organizing principle for qualitative coding. Here the units of text appropriate for coding will vary within a given document. Thus, in a study of organizations, "Size" might require only a few words per coding unit, whereas "Mission" might take a few pages. Or, a lengthy description of a heated stockholders meeting might be coded as "Internal Dissent." Realize also that a given code category may be applied to textual materials of quite different lengths. For example, some references to the organization's mission may be brief, others lengthy. Whereas standardization is a key principle in quantitative analysis, this is not the case in qualitative analysis.

Coding as a Physical Act

Before continuing with the logic of coding, let's take a moment to see what it actually looks like. Lofland and colleagues offer this description of manual filing:

Prior to the widespread availability of personal computers beginning in the late 1980s, coding frequently took the specific physical form of filing. The researcher established an expanding set of file folders with code names on the tabs and physically placed either the item of data itself or a note that referenced its location in another file folder. Before photocopying was easily available and cheap, some fieldworkers typed their fieldnotes with carbon paper, wrote codes in the margins of the copies of the notes, and cut them up with scissors.
They then placed the resulting slips of paper in corresponding file folders. — (2006:203)

As these researchers point out, personal computers have greatly simplified this task. However, the image of slips of paper that contain text and are put in folders representing code categories is useful for understanding the process of coding. In the next section, when I suggest that we code a textual passage with a certain code, imagine that we have the passage typed on a slip of paper and that we place it in a file folder bearing the name of the code. Whenever we assign two codes to a passage, imagine placing duplicate copies of the passage in two different folders representing the two codes.

Creating Codes

So, what should your code categories be? Glaser and Strauss (1967:101f) allow for the possibility of coding data for the purpose of testing hypotheses that have been generated by prior theory. In that case, then, the theory would suggest the codes, in the form of variables. In this section, however, we’re going to focus on the more common process of open coding. Strauss and Corbin define it as follows:

To uncover, name, and develop concepts, we must open up the text and expose the thoughts, ideas, and meanings contained therein. Without the first analytic step, the rest of the analysis and the communication that follows could not occur. Broadly speaking, during open coding, data are broken down into discrete parts, closely examined, and compared for similarities and differences. Events, happenings, objects, and actions/interactions that are found to be conceptually similar in nature or related in meaning are grouped under more abstract concepts termed “categories.” — (1998:102)

Open coding is the logical starting point for GTM qualitative coding. Beginning with some body of text (part of an interview, for example), you read and reread a passage, seeking to identify the key concepts contained within it.
Any particular piece of data may be given several codes, reflecting as many concepts. For example, notice all the concepts contained in this comment by a student interviewee:

I thought the professor should have given me at least partial credit for the homework I turned in.

Some obvious codes are “Professor,” “Homework,” and “Grading.” The result of open coding is the identification of numerous concepts relevant to the subject under study. The open coding of more and more text will lengthen the list of codes.

Besides open coding, two other types of coding take place in this method. Axial coding aims to identify the core concepts in the study. Although axial coding uses the results of open coding, more concepts can be identified through continued open coding after the axial coding has begun. Axial coding involves a regrouping of the data, in which the researcher uses the open code categories and looks for more-analytical concepts. For example, the passage just given also carries the concept of “perceptions of fairness,” which might appear frequently in the student interviews, thereby suggesting that it’s an important element in understanding students’ concerns. Another axial code reflected in the student comment might be “power relationships,” because the professor is seen to exercise power over the student.

open coding: The initial classification and labeling of concepts in qualitative data analysis. In open coding, the codes are suggested by the researchers’ examination and questioning of the data.

axial coding: A reanalysis of the results of open coding in Grounded Theory Method, aimed at identifying the important, general concepts.

The last kind of coding, selective coding, seeks to identify the central code in the study: the one that all the other codes relate to.
Both of the axial codes just mentioned might be restructured as aspects of a more general concept: “professor-student relationships.” Of course, in a real data analysis, decisions such as the ones we’ve been discussing would arise from masses of textual data, not from a single quotation. The basic notion of the Grounded Theory Method is that patterns of relationships can be teased out of an extensive, in-depth examination of a large body of observations.

selective coding: In Grounded Theory Method, this analysis builds on the results of open coding and axial coding to identify the central concept that organizes the other concepts that have been identified in a body of textual materials.

Here’s a concrete example to illustrate how you might engage in this form of analysis. Suppose you’re interested in the religious bases for homophobia. You’ve interviewed some people opposed to homosexuality who cite a religious basis for their feelings. Specifically, they refer you to these passages in the Book of Leviticus (Revised Standard Version):

18:22 You shall not lie with a male as with a woman; it is an abomination.

20:13 If a man lies with a male as with a woman, both of them have committed an abomination; they shall be put to death, their blood is upon them.

Although the point of view expressed here seems unambiguous, you might decide to examine it in more depth. Perhaps a qualitative analysis of Leviticus can yield a fuller understanding of where these injunctions against homosexuality fit into the larger context of Judeo-Christian morality. Let’s start our analysis by examining the two passages just quoted. We might begin by coding each passage with the label “Homosexuality.” This is clearly a key concept in our analysis. Whenever we focus on the issue of homosexuality in our analysis of Leviticus, we want to consider these two passages. Because homosexuality is such a key concept, let’s look more closely into what it means within the data under study.
We first notice the way homosexuality is identified: a man lying with a man “as with a woman.” Although we can imagine a lawyer seeking admission to heaven saying, “But here’s my point; if we didn’t actually lie down . . .” it seems safe to assume the passage refers to having sex, though it is not clear what specific acts might or might not be included. Notice, however, that the injunctions appear to concern male homosexuality only; lesbianism is not mentioned. In our analysis, then, each of these passages might also be coded “Male Homosexuality.” This illustrates two more aspects of coding: (1) Each unit can have more than one code and (2) hierarchical codes (one included within another) can be used. Now each passage has two codes assigned to it.

An even more general code might be introduced at this point: “Prohibited Behavior.” This is important for two reasons. First, homosexuality is not inherently wrong, from an analytical standpoint. The purpose of the study is to examine the way it’s made wrong by the religious texts in question. Second, our study of Leviticus may turn up other behaviors that are prohibited.

There are at least two more critical concepts in the passages: “Abomination” and “Put to Death.” Notice that whereas these are clearly related to “Prohibited Behavior,” they are hardly the same. Parking without putting money in the meter is prohibited, but few would call it an abomination and fewer still would demand the death penalty for that transgression. Let’s assign these two new codes to our first two passages.

At this point, we want to branch out from the two key passages and examine the rest of Leviticus. We therefore examine and code each of the remaining chapters and verses. In our subsequent analyses, we’ll use the codes we have already and add new ones as appropriate. When we do add new codes, it will be important to review the passages already coded to see whether the new codes apply to any of them.
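The two coding principles just illustrated, multiple codes per unit and hierarchical codes, are easy to represent concretely. Here is a minimal sketch in Python; the code hierarchy and verse labels follow the Leviticus example above, but the data structure itself is my own illustration, not taken from any QDA package:

```python
# Each code maps to its parent code (None = top of the hierarchy).
hierarchy = {
    "Male Homosexuality": "Homosexuality",
    "Homosexuality": "Prohibited Behavior",
    "Prohibited Behavior": None,
    "Abomination": None,
    "Put to Death": None,
}

def expand(codes):
    """Add every ancestor of each assigned code, so that a passage
    coded 'Male Homosexuality' also counts as 'Prohibited Behavior'."""
    result = set()
    for code in codes:
        while code is not None:
            result.add(code)
            code = hierarchy.get(code)
    return result

# Each unit (verse) can carry more than one code.
passage_codes = {
    "18:22": {"Male Homosexuality", "Abomination"},
    "20:13": {"Male Homosexuality", "Abomination", "Put to Death"},
}

# A query for the broad code retrieves passages tagged with the narrower one.
hits = [v for v, c in passage_codes.items() if "Prohibited Behavior" in expand(c)]
print(sorted(hits))  # → ['18:22', '20:13']
```

The hierarchy does the retrieval work: we never tagged either verse “Prohibited Behavior” directly, yet both are found under that code.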
Here are the passages we decide to code “Abomination.” (I’ve boldfaced the abominations.)

7:18 If any of the flesh of the sacrifice of his peace offering is eaten on the third day, he who offers it shall not be accepted, neither shall it be credited to him; it shall be an abomination, and he who eats of it shall bear his iniquity.

7:21 And if any one touches an unclean thing, whether the uncleanness of man or an unclean beast or any unclean abomination, and then eats of the flesh of the sacrifice of the LORD’s peace offerings, that person shall be cut off from his people.

11:10 But anything in the seas or the rivers that has not fins and scales, of the swarming creatures in the waters and of the living creatures that are in the waters, is an abomination to you.

11:11 They shall remain an abomination to you; of their flesh you shall not eat, and their carcasses you shall have in abomination.

11:12 Everything in the waters that has not fins and scales is an abomination to you.

11:13 And these you shall have in abomination among the birds, they shall not be eaten, they are an abomination: the eagle, the vulture, the osprey, 11:14 the kite, the falcon according to its kind, 11:15 every raven according to its kind, 11:16 the ostrich, the nighthawk, the sea gull, the hawk according to its kind, 11:17 the owl, the cormorant, the ibis, 11:18 the water hen, the pelican, the carrion vulture, 11:19 the stork, the heron according to its kind, the hoopoe, and the bat.

11:20 All winged insects that go upon all fours are an abomination to you.

11:41 Every swarming thing that swarms upon the earth is an abomination; it shall not be eaten.

11:42 Whatever goes on its belly, and whatever goes on all fours, or whatever has many feet, all the swarming things that swarm upon the earth, you shall not eat; for they are an abomination.
11:43 You shall not make yourselves abominable with any swarming thing that swarms; and you shall not defile yourselves with them, lest you become unclean.

18:22 You shall not lie with a male as with a woman; it is an abomination.

19:6 It shall be eaten the same day you offer it, or on the morrow; and anything left over until the third day shall be burned with fire.

19:7 If it is eaten at all on the third day, it is an abomination; it will not be accepted,

19:8 and every one who eats it shall bear his iniquity, because he has profaned a holy thing of the LORD; and that person shall be cut off from his people.

20:13 If a man lies with a male as with a woman, both of them have committed an abomination; they shall be put to death, their blood is upon them.

20:25 You shall therefore make a distinction between the clean beast and the unclean, and between the unclean bird and the clean; you shall not make yourselves abominable by beast or by bird or by anything with which the ground teems, which I have set apart for you to hold unclean.

Male homosexuality, then, isn’t the only abomination identified in Leviticus. As you compare these passages, looking for similarities and differences, it will become apparent that most of the abominations have to do with dietary rules—specifically those potential foods deemed “unclean.” Other abominations flow from the mishandling of ritual sacrifices. “Dietary Rules” and “Ritual Sacrifices” thus represent additional codes to be used in our analysis.

Earlier, I mentioned the death penalty as another concept to be explored in our analysis. When we take this avenue, we discover that many behaviors besides male homosexuality warrant the death penalty.
Among them are these:

20:2 Giving your children to Molech (human sacrifice)
20:9 Cursing your father or mother
20:10 Adultery with your neighbor’s wife
20:11 Adultery with your father’s wife
20:12 Adultery with your daughter-in-law
20:14 Taking a wife and her mother also
20:15 Men having sex with animals (the animals are to be killed, also)
20:16 Women having sex with animals
20:27 Being a medium or wizard
24:16 Blaspheming the name of the Lord
24:17 Killing a man

As you can see, the death penalty is broadly applied in Leviticus: everything from swearing to murder, including male homosexuality somewhere in between.

An extended analysis of prohibited behavior, short of abomination and death, also turns up a lengthy list. Among them are slander, vengeance, grudges, cursing the deaf, and putting stumbling blocks in front of blind people. In chapter 19, verse 19, Leviticus quotes God as ordering, “You shall not let your cattle breed with a different kind; you shall not sow your field with two kinds of seed; nor shall there come upon you a garment of cloth made of two kinds of stuff.” Shortly thereafter, he adds, “You shall not eat any flesh with the blood in it. You shall not practice augury or witchcraft. You shall not round off the hair on your temples or mar the edges of your beard.” Tattoos were prohibited, though Leviticus is silent on body piercing. References to all of these practices would be coded “Prohibited Acts” and perhaps given additional codes as well (recall “Dietary Rules”). See the box “Sexual Orientation Controversy” for more on this sort of coding activity.

memoing: Writing memos that become part of the data for analysis in qualitative research such as grounded theory. Memos can describe and define concepts, deal with methodological issues, or offer initial theoretical formulations.

IN THE REAL WORLD: SEXUAL ORIENTATION CONTROVERSY

A simple qualitative analysis such as the Leviticus example sheds new light on a key civil rights issue in the United States today. People are harassed, discriminated against, and even killed because of their sexual orientation. When GSS respondents are asked for their opinions about homosexuality, “Always wrong” is the most frequent response selected, followed by “Never wrong,” with small minorities choosing more moderate, mixed views. Anti-gay sentiments and actions are often justified on religious grounds, specifically the passages in Leviticus cited in this chapter. But the longer list of abominations in Leviticus was used in the TV series The West Wing to debunk the homophobic preaching of a radio talk-jock.

I hope this brief glimpse into a possible analysis will give you some idea of the process by which codes are generated and applied. You should also have begun to see how such coding would allow you to understand better the messages being put forward in a text and to retrieve data appropriately as you need them.

Memoing

In the Grounded Theory Method, the coding process involves more than simply categorizing chunks of text. As you code data, you should also be using the technique of memoing—writing memos or notes to yourself and others involved in the project. Some of what you write during analysis may end up in your final report; much of it will at least stimulate what you write.

In GTM, these memos have a special significance. Strauss and Corbin (1998:217) distinguish three kinds of memos: code notes, theoretical notes, and operational notes.

Code notes identify the code labels and their meanings. This is particularly important because, as in all social science research, most of the terms we use with technical meanings also have meanings in everyday language. It’s essential, therefore, to write down a clear account of what you mean by the codes used in your analysis.
In the Leviticus analysis, for example, you would want a code note regarding the meaning of “Abomination” and how you’ve used that code in your analysis of text.

Theoretical notes cover a variety of topics: reflections of the dimensions and deeper meanings of concepts, relationships among concepts, theoretical propositions, and so on. All of us occasionally ruminate over the nature of something, try to think it out, to make sense out of it. In qualitative data analysis, it’s vital to write down these thoughts, even those you’ll later discard as useless. They will vary greatly in length, but you should limit each to a single main thought so that you can sort and organize them all later. In the Leviticus analysis, one theoretical note might discuss the way that most of the injunctions implicitly address the behavior of men, with women being mostly incidental.

Operational notes deal primarily with methodological issues. Some will draw attention to data-collection circumstances that may be relevant to understanding the data later on. Others will consist of notes directing future data collection.

These memos are written throughout the data-collection and analysis process. Thoughts demanding memos will come to you as you reread notes or transcripts, code chunks of text, or discuss the project with others. It’s a good idea to get in the habit of writing out your memos as soon as possible after the thoughts come to you. Notice that whereas we often think of writing as a linear process, starting at the beginning and moving through to the conclusion, memoing does not follow this pattern. It might be characterized as a process of creating chaos and then finding order within it. To explore this process further, refer to the works cited in this discussion and at the end of the chapter. You’ll also find a good deal of information on the web. Ultimately, the best education in this process comes from practice.
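One way to keep the three memo types straight, whether on paper or in a file, is a small record structure with one main thought per memo, as advised above. This sketch is my own illustration (the memo texts echo the Leviticus examples); it is not part of any QDA package:

```python
# One memo = one main thought, tagged by kind ("code", "theoretical",
# or "operational") so memos can be sorted and organized later.
from dataclasses import dataclass, field
from datetime import date

KINDS = {"code", "theoretical", "operational"}

@dataclass
class Memo:
    kind: str           # which of Strauss and Corbin's three kinds
    text: str           # a single main thought
    written: date = field(default_factory=date.today)

    def __post_init__(self):
        if self.kind not in KINDS:
            raise ValueError(f"unknown memo kind: {self.kind}")

memos = [
    Memo("code", 'How "Abomination" is used as a code in this analysis.'),
    Memo("theoretical", "Most injunctions implicitly address men; women are incidental."),
    Memo("operational", 'Re-read early chapters after adding the "Dietary Rules" code.'),
]

# Sorting the chaos later is a one-line filter.
theoretical = [m.text for m in memos if m.kind == "theoretical"]
print(len(theoretical))  # → 1
```

Dating each memo automatically, as here, makes it easier to reconstruct how your thinking developed over the course of the project.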
Even if you don’t have a research project under way, you can practice now on class notes. Or start a journal and code it.

Concept Mapping

It should be clear by now that qualitative data analysts spend a lot of time committing thoughts to paper (or to a computer file) and figuring out how they relate to one another. Often, we can think out relationships among concepts even more clearly by putting the concepts in a graphical format, a process called concept mapping. Some researchers find it useful to put all their major concepts on a single sheet of paper, whereas others spread their thoughts across several sheets of paper, blackboards, magnetic boards, computer pages, or other media. Figure 13-3 shows how we might think out some of the concepts of Goffman’s examination of gender and advertising. (This image was created through the use of Inspiration, a concept-mapping computer program.)

Incidentally, many of the topics discussed in this section have useful applications in quantitative as well as qualitative analyses. Certainly, concept mapping is appropriate in both types of analysis. The several types of memos would also be useful in both. And the discussion of coding readily applies to the coding of open-ended questionnaire responses for the purpose of quantification and statistical analysis. (We’ll look at coding again in the next chapter, on quantifying data.)

concept mapping: The graphical display of concepts and their interrelations, useful in the formulation of theory.

FIGURE 13-3 An Example of Concept Mapping (concepts shown include Gender, Power, Authority, Social status, Social worth, Physical location, Active/passive roles, and Servant/master)

Having noted the overlap of qualitative and quantitative techniques, it seems fitting now to address an instrument that is primarily associated with quantitative research but that is proving quite valuable for qualitative analysts as well—the personal computer.
COMPUTER PROGRAMS FOR QUALITATIVE DATA

The advent of computers, both mainframe and personal, has been a boon to quantitative research, allowing the rapid calculation of extremely complex statistics. The importance of the computer for qualitative research has been somewhat more slowly appreciated. Some qualitative researchers were quick to adapt the basic capacities of computers to nonnumerical tasks, but it took a bit longer for programmers to address the specific needs of qualitative research. Today, however, many powerful programs are available.

Let’s start this section with a brief overview of some of the ways you can use basic computer tools in qualitative research. Perhaps only those who can recall hours spent with carbon paper and White-out can fully appreciate the glory of computers as a note-taking device. “Easier editing” and “easier duplication” simply do not capture the scope of the advance. Moving beyond the basic recording and storage of data, simple word-processing programs can be used for some data analysis. The “find” or “search” command will take you to passages containing key words. Or, going one step further, you can type code words alongside passages in your notes so that you can search for those keywords later.

Database and spreadsheet programs can also be used for processing and analyzing qualitative data. Figure 13-4 offers a simple illustration of how some of the verses from Leviticus might be manipulated within a spreadsheet. The three columns to the left represent three of the concepts we’ve discussed. An “x” means that the passage to the right contains that concept. As shown, the passages are sorted in such a way as to gather all those dealing with punishment by death. Another simple “sort” command would gather all those dealing with sex, with homosexuality, or with any of the other concepts coded.
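The same spreadsheet manipulation can be mimicked in a few lines of ordinary code. In this sketch (my own illustration, with rows abridged from the Figure 13-4 example), each row marks which coded concepts a verse contains, and a sort gathers the death-penalty passages just as the spreadsheet does:

```python
# Rows mirror the spreadsheet: one column per coded concept,
# True standing in for the "x" marks.
rows = [
    {"verse": "20:13", "sex": True,  "homosex": True,  "death": True},
    {"verse": "20:12", "sex": True,  "homosex": False, "death": True},
    {"verse": "20:09", "sex": False, "homosex": False, "death": True},
    {"verse": "18:22", "sex": True,  "homosex": True,  "death": False},
]

# "Sort" so all passages coded for death come first, as in the figure.
# Python's sort is stable, so rows keep their original relative order.
rows.sort(key=lambda r: not r["death"])

# Or simply gather every verse coded for a given concept.
death_verses = [r["verse"] for r in rows if r["death"]]
print(death_verses)  # → ['20:13', '20:12', '20:09']
```

Swapping `"death"` for `"sex"` or `"homosex"` in the sort key reproduces the other “sort” commands the text describes.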
This brief illustration should give you some idea of the possibilities for using readily available programs as tools in qualitative data analysis. Happily, there are now a large number of programs created specifically for that purpose.

QDA Programs

The simple spreadsheet in Figure 13-4 should give you a basic idea of how computers might be used for the analysis of qualitative social research data.

sex  homosex  death  Verse  Passage
 x      x       x    20:13  If a man lies with a male as with a woman, both of them have committed an abomination; they shall be put to death, their blood is upon them.
 x              x    20:12  If a man lies with his daughter-in-law, both of them shall be put to death; they have committed incest, their blood is upon them.
 x              x    20:15  If a man lies with a beast, he shall be put to death; and you shall kill the beast.
                x    20:09  For every one who curses his father or his mother shall be put to death; he has cursed his father or his mother, his blood is upon him.
                x    20:02  Any man of the people of Israel, or of the strangers that sojourn in Israel, who gives any of his children to Molech shall be put to death.
 x      x            18:22  You shall not lie with a male as with a woman; it is an abomination.

FIGURE 13-4 Using a Spreadsheet for Qualitative Analysis

However, there is now a long list of sophisticated computer programs available for this purpose. Where the analyst’s problem used to be merely finding such a program, the problem now lies in choosing one of so many. Here are a few commonly used qualitative data analysis (QDA) programs with online sites where you can learn more about them and, in many cases, download demo copies.

• Alceste: http://www.image.cict.fr/english/index_alceste.htm
• AnSWR: http://www.cdc.gov/hiv/software/answr.htm
• Atlas.ti: http://www.atlasti.com/index.php
• Ethno 2: http://www.indiana.edu/%7Esocpsy/ESA/
• Ethnograph: http://www.qualisresearch.com/
• HyperQual: http://home.satx.rr.com/hyperqual/
• HyperResearch: http://www.researchware.com/
• HyperTranscribe: http://www.researchware.com/
• MAXqda: http://www.maxqda.com/
• NUD*IST, NVivo 7: http://www.qsr.com.au/products/productoverview/NVivo_7.htm
• QDA Miner: http://www.provalisresearch.com/QDAMiner/QDAMinerDesc.html
• Qualrus: http://www.ideaworks.com/Qualrus/index.html
• SPAD: http://eng.spad.eu/
• TAMS: http://sourceforge.net/projects/tamsys
• T-LAB: http://www.tlab.it/en/presentazione.asp
• Weft: http://www.pressure.to/qda/

There are also some powerful online resources to assist you in choosing the program best suited to your needs. Sociologists at the University of Surrey, England, have prepared an overview of these and other programs with descriptions and contact information. You can find this resource at http://www.soc.surrey.ac.uk/sru/SRU1.html. Another excellent resource is “Choosing a CAQDAS Package” by Ann Lewins and Christina Silver (2006), which can be found at http://caqdas.soc.surrey.ac.uk/. This will familiarize you with some of the key features in such computer programs and help you choose which one is best suited to your purposes.

Let’s turn now to a couple of illustrations of QDA programs at work. Although the available programs differ somewhat from one another, I think these illustrations will give you a good sense of the general use of computers to analyze qualitative data. We’ll briefly examine Leviticus, and then we’ll examine a project that used a different program and focused on understanding the experiences of women film directors.

Leviticus as Seen through NUD*IST

We’ll first consider one of the programs just mentioned, NUD*IST (Nonnumeric Unstructured Data, Index Searching, and Theorizing). This popular program for teaching qualitative social research offers a fair representation of QDA programs.
Although the text materials to be coded can be typed directly into NUD*IST, usually materials already in existence—such as field notes or, in this case, the verses of Leviticus—are imported into the program. Menu-based commands do this easily, though the text must be in a plain-text format (that is, without word-processing or other formatting). Figure 13-5 shows how the text is displayed within NUD*IST. For the illustrations in this section, I have used the Macintosh version of the program, but the Windows version is similar.

FIGURE 13-5 How Text Materials Are Displayed in NUD*IST

To see the document, select its name in the “Document Explorer” window and click “Browse.” The text window can be resized and moved around the screen to suit your taste. Note the set of buttons in the upper right corner of the illustration. These allow you to select portions of the text for purposes of editing, coding, and other operations.

Now let’s create a concept code: “homosex.” This will stand for references to male homosexuality. Figure 13-6 shows what the creation of a concept code looks like.

FIGURE 13-6 Creating the Code “homosex”

As we create codes for our concepts, we can use them to code the text materials. Figure 13-7 illustrates how this is done. In the document browser, you can see that verse 20:13 has been selected (indicated by the box outline around this verse). Having done that, we click the button labeled “Add Coding” (not shown in this illustration). This prompts the computer to ask us to identify the appropriate code. The easiest way to respond is to click the “Browse” button, which presents you with a list of the current codes. In this example, I selected “homosex” and entered the code ID (100). As text materials are coded, the program can then be used for purposes of analysis.
As a simple example, we might want to pull together all the passages coded “homosex.” This would allow us to see them all at once, looking for similarities and differences.

FIGURE 13-7 Coding a Text Passage

Figure 13-8 shows how NUD*IST would bring together the passages referring to male homosexuality. To do this, all you do is select the code name in the “Node Explorer” window and click the “Make Report” button.

FIGURE 13-8 Reporting on “homosex”

This simple example illustrates the possibilities opened up by a program designed specifically for qualitative data analysis. Now let’s probe more deeply into the possibilities of computerized qualitative data analysis. Sandrine Zerbib is a French sociologist interested in understanding the special difficulties faced by women breaking into the male-dominated world of film direction. To address this issue, she interviewed 30 women directors in depth. Having compiled hours of recorded interviews, she turned to a popular program, NVivo (a successor to NUD*IST), as a vehicle for analysis. In the next section, she directly describes her experiences with the ongoing process of qualitative data analysis.

Using NVivo to Understand Women Film Directors, by Sandrine Zerbib

For those of you who feel uncomfortable using new programs or computer programs in general, NVivo should work well. It is visually clear and intuitive, and it requires mostly dragging (moving text or objects using a mouse). To learn more about the tools in this software package, let’s look at a project file I created using NVivo. Figure 13-9 shows the opened browser window containing my interview with Berta, one of the 30 film directors I interviewed. The “Coding Stripes” view allows you to visualize “nodes” (i.e., codes) associated with the text.
Parts of the same passage were coded with more than one node, which explains the overlapping of “stripes” or brackets. The “Coder” window is opened on the right. You can click on the 0003 symbol to visualize “child” nodes (or subnodes) of a particular “tree” node (main node). As you can see in this figure, when you click on the “first job in the industry” node located in the coder window, the passages associated with that node are automatically highlighted. In Figure 13-10, a new interview has been imported as a “Rich Text Format” file into the project. You can either import a file with a particular formatting of headings and font styles or make these formatting changes while using NVivo. Using styles and other text-formatting tools can help you refine your coding system. For instance, it can be very useful to create styles that format the interviewer’s and the interviewee’s narratives differently; this way, you can see at a glance which type of narrative you’re reading. Of course, taking the time to format makes more sense in some cases than in others. For instance, I found it extremely useful to use formatting in projects that comprise a large number of interviews. As you explore possibilities, you’ll find certain formatting choices more helpful than others. You’ll want to consider such things as how much a change in font or typeface expands or contracts your text, and how easy or hard a given format is to read. For example, you might increase the size of or use boldface for the interviewer’s text rather than the interviewee’s text, because the interviewer speaks far less than the interviewee and you do not want your text to be too long. In Figure 13-10 I chose to italicize my part of the interview. Another important feature of NVivo is that it allows you to attach passages to nodes easily. You can select passages based on content in units as small as a single word, then drag them where you want them. 
In Figure 13-10, I highlighted part of a paragraph, opened the “career experience” node, and simply dragged the passage to the child node “early artist” located in the coder within the “career experience” category. Another helpful tool is the “attribute” function of NVivo. By clicking on the colorful cube icon, you can open the attribute browser any time for any of your interviews or other texts. This allows you to begin a content analysis by creating attributes (or variables) such as age, sex, date of interview, number of children, and so on. There is no rule as to how many attributes you should create, but you need to weigh the time spent on choosing attributes against the potential usefulness of those attributes. Because values previously created are automatically presented as possible choices, it’s easy to keep your names and definitions of attributes consistent as you move from text to text. For instance, in Figure 13-11, I had typed “Los Angeles” under the category “Live city” (defined as “Where does the interviewee live?”) while coding my interview with Ulma. The value “Los Angeles” was later automatically available when I coded my interview with Berta. You can see that using “LA” in one instance and “Los Angeles” in another might have caused problems. The “attribute” function of NVivo can also help you generate an interview profile or a quantitative analysis. To organize your attributes, you can create either “free” nodes or “tree” nodes. Free nodes are independent nodes, which means that you can’t organize them in any type of hierarchy or structure. They are generally those nodes that cannot be related to others. Tree nodes, as their name suggests, can be organized into a hierarchy. You can create “child” nodes and “sibling” nodes in relation to them. Tree nodes are the most helpful for analysis purposes. To create a tree node, click on “trees” next to the green trees symbol and then click on the right button of your mouse. 
A window like the one in Figure 13-12 will appear. Next, click on "create" and choose either "child node" or "sibling node." In Figure 13-12, I had created a node called "abuse" and needed to create a child node called "school abuse," because my interviewee was telling me about the abuse she had experienced at school. I created a child node and then typed "school abuse" over the default "tree node" name just created under the "abuse" node. If Ulma reported abuse experienced at home, I could have created another child node called "domestic abuse" under "abuse," or simply a sibling node to "school abuse." Again, I can select any passage from the text and drag it to any number of nodes.

FIGURE 13-9 Viewing the Interview with Berta
FIGURE 13-10 Example of Formatted Text and Attaching Passages to Nodes
FIGURE 13-11 Using the "Attribute" Function
FIGURE 13-12 Creating a Tree Node
FIGURE 13-13 Adding a Description

When creating new nodes, be sure to attach a description to each. It's easy to forget what you originally meant if you don't write it down. You can define each node with the "properties" command. With this feature, you can also keep track of when a particular node was created and who created it, in case you're working with other coders. In Figure 13-13, I added the description "Mental and physical abuse inflicted by school authority" to the "school abuse" node. You can modify the properties of the nodes you create and the documents you import at any time. Be aware, however, that NVivo is not designed to merge project files. If you plan to have several people coding the same data, each will have to work on the same NVivo project file at different times rather than simultaneously; they will not be able to combine separate files at the end. If this is an issue, N5 (another successor to NUD*IST) is an alternative program that allows files to be merged. Coding can be tedious and time consuming.
However, the analysis it allows may be priceless. You can use NVivo to generate reports for all or specific nodes or texts. For instance, in Figure 13-14 I inquired about all interviewee narratives that were coded under the "gender discrimination" node. As you can see, all passages are extracted under the "gender discrimination" node browser. Each interview name is specified, as well as the size of each passage. From this window, you could turn on the coding-stripe view and see how each passage is associated with nodes other than "gender discrimination." You could also do further coding from this window by simply dragging selected text to a node. Finally, you could generate a report by attribute; for example, you could get all passages that have to do with "gender discrimination" for women who live in Los Angeles or New York and compare them.

FIGURE 13-14 Extracting Materials by Node

THE QUALITATIVE ANALYSIS OF QUANTITATIVE DATA

Although it is important and appropriate to distinguish between qualitative and quantitative research and to discuss them separately, they are by no means incompatible or in competition. You need to operate in both modes to explore your full potential as a social researcher. Chapter 14 explores some ways in which quantitative analyses can strengthen qualitative studies. Before we move on, however, let's look at an example of how quantitative data demand qualitative assessment. Figure 13-15 presents FBI data on homicides committed in the United States. These data are often presented in tabular form, but notice how clearly the patterns of crime appear in this three-dimensional graph. Even though the graph is based on statistical data, it conveys its meaning quite clearly. Although summarizing it in the form of equations may be useful for certain purposes, it would add nothing to the clarity of the picture itself.
Thus, the qualitative assessment of the graph clarifies the quantitative data in a way that no other representation could. Here's a case where a picture is truly worth a thousand words.

FIGURE 13-15 Number of One-on-One Homicides by Age of Victim and Age of Offender, Raw Data. Source: Michael D. Maltz, "Visualizing Homicide: A Research Note," Journal of Quantitative Criminology 15, no. 4 (1998): 401.

ETHICS AND QUALITATIVE DATA ANALYSIS

At least two ethical issues raise special concern in the analysis and reporting of qualitative research. First, because such analysis calls so directly on subjective judgments, there is an obvious risk of seeing what you are looking for or want to find. The risk increases in the case of participatory action research or other projects involving an element of social justice. Researcher bias is hardly an inevitable outcome, however. Experienced qualitative analysts avoid this pitfall in at least two ways: by cultivating a deliberate awareness of their own values and preferences, and by adhering to established techniques for data collection and analysis. And as an additional protection, the peer-review process in scientific research encourages colleagues to point out any failings in this regard.

Second, qualitative research makes protecting subjects' privacy particularly important. The qualitative researcher will often analyze and report data collected from identifiable individuals. Throughout the book, I've indicated the importance of not revealing what we learn about subjects, as in the case of data collection. When writing up the results of your analyses, you will often need to make concerted efforts to conceal identities. Individuals, organizations, and communities are often given pseudonyms toward this end. Sometimes, you may need to suppress details that would let outsiders figure out who you are talking about. Thus, it may be appropriate to speak about interviewing "a church leader" rather than "the head deacon." You may also need to suppress or alter age, race, or gender references if that would give away a subject's identity. The key principle is to respect the privacy of those we study.

Main Points

Introduction
❏ Qualitative analysis is the nonnumerical examination and interpretation of observations. Quantification requires a simplification of data through a loss of detail. Sometimes those details are critical to understanding the "whole picture." You've experienced this if you've ever found yourself being categorized by someone else. Let's say you express some political opinion. Someone then asks what your major is, and you reply, "Sociology." Then that same person says, "Well, of course!"—implying that they now "know" a long list of things about you—some true, some false—that will now shape their "understanding" of the political opinion you expressed. You may have experienced being similarly categorized in terms of your religion, race, place of birth, or gender. A similar loss can occur in the quantification of data, where a limited number of categories takes the place of varied details. Qualitative analysis, while coding and categorizing, aims at staying closer to the original details.

Linking Theory and Analysis
❏ Qualitative analysis involves a continual interplay between theory and analysis. In analyzing qualitative data, we seek to discover patterns such as changes over time or possible causal links between variables.
❏ Examples of approaches to the discovery and explanation of such patterns are Grounded Theory Method (GTM), semiotics, and conversation analysis (CA).

Qualitative Data Processing
❏ The processing of qualitative data is as much art as science. Three key tools for preparing data for analysis are coding, memoing, and concept mapping.
❏ In contrast to the standardized units used in coding for statistical analyses, the units to be coded in qualitative analyses may vary within a document. Although codes may be derived from the theory being explored, more often researchers use open coding, in which codes are suggested by the researchers' examination and questioning of the data.
❏ Memoing is appropriate at several stages of data processing and serves to capture code meanings, theoretical ideas, preliminary conclusions, and other thoughts that will be useful during analysis.
❏ Concept mapping uses diagrams to explore relationships in the data graphically.

Computer Programs for Qualitative Data
❏ Many computer programs, such as NUD*IST and NVivo, are specifically designed to assist researchers in the analysis of qualitative data.

The Qualitative Analysis of Quantitative Data
❏ Researchers need both qualitative and quantitative analysis for the fullest understanding of social science data.

Ethics and Qualitative Data Analysis
❏ The subjective element in qualitative data analysis provides an added challenge to avoiding bias in the interpretation of data.
❏ Because the qualitative data analyst knows the identity of subjects, taking special steps to protect their privacy is crucial.

Key Terms
axial coding, case-oriented analysis, concept mapping, constant comparative method, conversation analysis (CA), cross-case analysis, Grounded Theory Method (GTM), memoing, open coding, qualitative analysis, selective coding, semiotics, variable-oriented analysis

Review Questions
1. Review Goffman's examination of gender advertising, and collect and analyze a set of advertising photos from magazines or newspapers. What is the relationship between gender and status in the materials you found?
2. Review the discussion of homosexuality in the Book of Leviticus. How might the examination be structured as a cross-case analysis?
3. Imagine you were conducting a cross-case analysis of revolutionary documents such as the Declaration of Independence and the Declaration of the Rights of Man and of the Citizen (from the French Revolution). What key concepts might you code in the following sentence?
"When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the Powers of the earth, the separate and equal station to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation."
4. Go to the World Press Review online (http://www.wpr.com) and pick a controversial news topic discussed by several newspapers. See if you can identify characteristics of those newspapers (such as political leaning, region) that might explain the different points of view expressed on the topic.

ADDITIONAL READINGS

Berg, Bruce. 1998. Qualitative Research Methods for the Social Sciences. Boston: Allyn and Bacon. Here's a comprehensive and readable review of the techniques for collecting and analyzing qualitative data, with a special sensitivity to research ethics.
Denzin, Norman K., and Yvonna S. Lincoln, eds. 1994. Handbook of Qualitative Research. Thousand Oaks, CA: Sage. Here's a rich resource covering many aspects of qualitative research, in both theory and practice.
Glaser, Barney G., and Anselm L. Strauss. 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine. This is the classic statement of grounded theory, with practical suggestions that are still useful today.
Hutchby, Ian, and Robin Wooffitt. 1998. Conversation Analysis: Principles, Practices and Applications. Cambridge, England: Polity Press. An excellent overview of the conversation analysis method. The book examines the theory behind the technique, how to use it, and some possible applications.
Jacobson, David. 1999. "Doing Research in Cyberspace." Field Methods 11:127–45. The use of the Internet for social research is not limited to surveys and experiments, as Jacobson demonstrates in this examination of computer-mediated communication (CMC).
King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton, NJ: Princeton University Press. This controversial book by three political scientists seeks to bring the logic of causal, quantitative analysis to bear on qualitative data. Their stated intention is to unify the two approaches.
Lewins, Ann, and Christina Silver. 2006. "Choosing a CAQDAS Package." July. http://caqdas.soc.surrey.ac.uk/. This excellent working paper gives a detailed examination of the features to look for in a qualitative data analysis program and discusses some of the more popular programs available.
McCormack, Coralie. 2004. "Storying Stories: A Narrative Approach to In-Depth Interview Conversations." International Journal of Social Research Methodology 7 (3): 219–36. The in-depth interviews common to qualitative field research can result in lengthy narrative accounts that can pose daunting challenges for analysts. This article details a set of procedures for organizing the analysis of such stories, with a special concern for the ethical dimension.
Strauss, Anselm, and Juliet Corbin. 1998. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Thousand Oaks, CA: Sage. This updated statement of grounded theory offers special guidance on coding and memoing.

Online Study Resources
Go to http://sociology.wadsworth.com/babbie_basics4e and click on ThomsonNow for access to this powerful online study tool. You will get a personalized study plan based on your responses to a diagnostic pretest. Once you have mastered the material with the help of interactive learning tools, you can take a posttest to confirm that you are ready to move on to the next chapter.

Website for The Basics of Social Research 4th edition
At the book companion website (http://sociology.wadsworth.com/babbie_basics4e) you will find many resources in addition to ThomsonNow to aid you in studying for your exams. For example, you will find Tutorial Quizzes with feedback, Internet Exercises, Flashcards, and Chapter Tutorials, as well as Extended Projects, InfoTrac College Edition search terms, Social Research in Cyberspace, GSS Data, Web Links, and primers for using various data analysis software such as SPSS and NVivo.

14 QUANTITATIVE DATA ANALYSIS

What You'll Learn in this Chapter
Often, data are converted to numerical form for statistical analyses. In this chapter, we'll begin the process of quantifying data, then turn to analysis. Quantitative analysis may be descriptive or explanatory; it may involve one, two, or several variables. We begin our examination of quantitative analyses with some simple but powerful ways of manipulating data in order to attain research conclusions.

In this chapter . . .
Introduction
Quantification of Data
  Developing Code Categories
  Codebook Construction
  Data Entry
Univariate Analysis
  Distributions
  Central Tendency
  Dispersion
  Continuous and Discrete Variables
  Detail versus Manageability
Subgroup Comparisons
  "Collapsing" Response Categories
  Handling Don't Knows
  Numerical Descriptions in Qualitative Research
Bivariate Analysis
  Percentaging a Table
  Constructing and Reading Bivariate Tables
Introduction to Multivariate Analysis
Sociological Diagnostics
Ethics and Quantitative Data Analysis

INTRODUCTION

In Chapter 13, we saw some of the logic and techniques by which social researchers analyze the qualitative data they have collected. This chapter will examine quantitative analysis, or the techniques by which researchers convert data to a numerical form and subject it to statistical analyses.

quantitative analysis The numerical representation and manipulation of observations for the purpose of describing and explaining the phenomena that those observations reflect.

In Chapter 13, we saw several inherent shortcomings in quantitative data. These shortcomings centered primarily on standardization and superficiality in the face of a social reality that is varied and deep. Can anything meaningful be learned from data that sacrifice meaningful detail in order to permit numerical manipulations? See the "What Do You Think? Revisited" box toward the end of the chapter. To begin, we'll look at quantification—the process of converting data to a numerical format. This involves converting social science data into a machine-readable form—a form that can be read and manipulated by computers and similar machines used in quantitative analysis. The rest of the chapter will present the logic and some of the techniques of quantitative data analysis—starting with the simplest case, univariate analysis, which involves one variable, then discussing bivariate analysis, which involves two variables.
We'll move on to a brief introduction to multivariate analysis, or the examination of several variables simultaneously, such as age, education, and prejudice, and then we'll end with a discussion of sociological diagnostics. Before we can do any sort of analysis, we need to quantify our data. Let's turn now to the basic steps involved in converting data into machine-readable forms amenable to computer processing and analysis.

[Photo: Some students take to statistics more readily than others. (Aaron Babbie)]

QUANTIFICATION OF DATA

Today, quantitative analysis is almost always done by computer programs such as SPSS and MicroCase. For those programs to work their magic, they must be able to read the data you've collected in your research. If you've conducted a survey, for example, some of your data are inherently numerical: age or income, for instance. Whereas the writing and check marks on a questionnaire are qualitative in nature, a scribbled age is easily converted to quantitative data. Other data are also easily quantified: Transforming male and female into "1" and "2" is hardly rocket science. Researchers can also easily assign numerical representations to such variables as religious affiliation, political party, and region of the country. Some data are more challenging, however. If a survey respondent tells you that he or she thinks the biggest problem facing Woodbury, Vermont, is "the disintegrating ozone layer," the computer can't process that response numerically. You must translate by coding the responses. We've already discussed coding in connection with content analysis (Chapter 11) and again in connection with qualitative data analysis (Chapter 13). Now we look at coding specifically for quantitative analysis, which differs from the other two primarily in its goal of converting raw data into numbers.
As with content analysis, the task of quantitative coding is to reduce a wide variety of idiosyncratic items of information to a more limited set of attributes composing a variable. Suppose, for example, that a survey researcher asks respondents, "What is your occupation?" The responses to such a question will vary considerably. Although it will be possible to assign each reported occupation a separate numerical code, this procedure will not facilitate analysis, which typically depends on several subjects having the same attribute. The variable occupation has many preestablished coding schemes. One such scheme distinguishes professional and managerial occupations, clerical occupations, semiskilled occupations, and so forth. Another scheme distinguishes different sectors of the economy: manufacturing, health, education, commerce, and so forth. Still others combine both of these schemes. Using an established coding scheme gives you the advantage of being able to compare your research results with those of other studies. To learn more about preestablished coding schemes, visit the Bureau of Labor Statistics* to learn about their Standard Occupational Classification: http://stats.bls.gov/soc/soc_majo.htm. (*Each time the Internet icon appears, you'll be given helpful leads for searching the World Wide Web.) The occupational coding scheme you choose should be appropriate to the theoretical concepts being examined in your study. For some studies, coding all occupations as either white-collar or blue-collar might suffice. For others, self-employed and not self-employed might do. Or a peace researcher might wish to know only whether the occupation depended on the defense establishment or not. Although the coding scheme should be tailored to meet particular requirements of the analysis, you should keep one general guideline in mind. If the data are coded to maintain a great deal of detail,
code categories can always be combined during an analysis that does not require such detail. If the data are coded into relatively few, gross categories, however, you'll have no way during analysis to recreate the original detail. To keep your options open, it's a good idea to code your data in greater detail than you plan to use in the analysis.

Developing Code Categories

There are two basic approaches to the coding process. First, you may begin with a relatively well-developed coding scheme, derived from your research purpose. Thus, as suggested previously, the peace researcher might code occupations in terms of their relationship to the defense establishment. Or, you may want to use an existing coding scheme so that you can compare your findings with those of previous research. The alternative method is to generate codes from your data, as discussed in Chapter 13. Let's say we've asked students in a self-administered campus survey to say what they believe is the biggest problem facing their college today. Here are a few of the answers they might have written in:

Tuition is too high
Not enough parking spaces
Faculty don't know what they are doing
Advisors are never available
Not enough classes offered
Cockroaches in the dorms
Too many requirements
Cafeteria food is infected
Books cost too much
Not enough financial aid

Take a minute to review these responses and see whether you can identify some categories represented. Realize that there is no right answer; several coding schemes might be generated from these answers. Let's start with the first response: "Tuition is too high." What general areas of concern does that response reflect? One obvious possibility is "Financial Concerns." Are there other responses that would fit into that category? Table 14-1 shows which of the questionnaire responses could fit.
TABLE 14-1 Student Responses That Can Be Coded "Financial Concerns"

                                          Financial Concerns
Tuition is too high                                X
Not enough parking spaces
Faculty don't know what they are doing
Advisors are never available
Not enough classes offered
Cockroaches in the dorms
Too many requirements
Cafeteria food is infected
Books cost too much                                X
Not enough financial aid                           X

In more general terms, the first answer can also be seen as reflecting nonacademic concerns. This categorization would be relevant if your research interest included the distinction between academic and nonacademic concerns. If that were the case, the responses might be coded as shown in Table 14-2. Notice that I didn't code the response "Books cost too much" in Table 14-2, because this concern could be seen as representing both of the categories. Books are part of the academic program, but their cost is not. This signals the need to refine the coding scheme we're developing. Depending on our research purpose, we might be especially interested in identifying any problems that had an academic element; hence we'd code this one "Academic." Just as reasonably, however, we might be more interested in identifying nonacademic problems and would code the response accordingly. Or, as another alternative, we might create a separate category for responses that involved both academic and nonacademic matters. As yet another alternative, we might want to separate nonacademic concerns into those involving administrative matters and those dealing with campus facilities. Table 14-3 shows how the first ten responses would be coded in that event. As these few examples illustrate, there are many possible schemes for coding a set of data.
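One such scheme can also be written out explicitly as a lookup table, which makes it easy to spot responses the scheme fails to cover. The Python sketch below is illustrative only; in particular, assigning the ambiguous "Books cost too much" response to "Administrative" is my own judgment call of the kind discussed above, not something the scheme itself settles.

```python
# A coding scheme for the campus-survey responses, written as a lookup
# table. Recording the scheme explicitly documents every judgment call.
from collections import Counter

scheme = {
    "Tuition is too high": "Administrative",
    "Not enough parking spaces": "Facilities",
    "Faculty don't know what they are doing": "Academic",
    "Advisors are never available": "Academic",
    "Not enough classes offered": "Academic",
    "Cockroaches in the dorms": "Facilities",
    "Too many requirements": "Academic",
    "Cafeteria food is infected": "Facilities",
    "Books cost too much": "Administrative",   # ambiguous case: a judgment call
    "Not enough financial aid": "Administrative",
}

def code_response(response):
    # A None result signals that the scheme is not exhaustive for this
    # response and a new or revised category is needed.
    return scheme.get(response)

# How many responses fall into each category under this scheme:
print(Counter(scheme.values()))
```

Because each response maps to exactly one category, the scheme is mutually exclusive by construction; exhaustiveness has to be checked against new responses as they come in, which is what the `None` return flags.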
Your choices should match your research purposes and reflect the logic that emerges from the data themselves. Often, you'll find yourself modifying the code categories as the coding process proceeds. Whenever you change the list of categories, however, you must review the data already coded to see whether changes are in order. Like the set of attributes composing a variable, and like the response categories in a closed-ended questionnaire item, code categories should be both exhaustive and mutually exclusive. Every piece of information being coded should fit into one and only one category. Problems arise whenever a given response appears to fit equally into more than one code category or whenever it fits into no category: Both signal a mismatch between your data and your coding scheme.

TABLE 14-2 Student Concerns Coded as "Academic" and "Nonacademic"

                                          Academic    Nonacademic
Tuition is too high                                        X
Not enough parking spaces                                  X
Faculty don't know what they are doing        X
Advisors are never available                  X
Not enough classes offered                    X
Cockroaches in the dorms                                   X
Too many requirements                         X
Cafeteria food is infected                                 X
Books cost too much
Not enough financial aid                                   X

TABLE 14-3 Nonacademic Concerns Coded as "Administrative" or "Facilities"

                                          Academic    Administrative    Facilities
Tuition is too high                                        X
Not enough parking spaces                                                    X
Faculty don't know what they are doing        X
Advisors are never available                  X
Not enough classes offered                    X
Cockroaches in the dorms                                                     X
Too many requirements                         X
Cafeteria food is infected                                                   X
Books cost too much
Not enough financial aid                                   X

If you're fortunate enough to have assistance in the coding process, you'll need to train your coders in the definitions of code categories and show them how to use those categories properly. To do so, explain the meaning of the code categories and give several examples of each. To make sure your coders fully understand what you have in mind, code several cases ahead of time.
Then ask your coders to code the same cases without knowing how you coded them. Finally, compare your coders' work with your own. Any discrepancies will indicate an imperfect communication of your coding scheme to your coders. Even with perfect agreement between you and your coders, however, it's best to check the coding of at least a portion of the cases throughout the coding process. If you're not fortunate enough to have assistance in coding, you should still obtain some verification of your own reliability as a coder. Nobody's perfect, especially a researcher hot on the trail of a finding. Suppose that you're studying an emerging cult and that you have the impression that people who do not have a regular family will be the most likely to regard the new cult as a family substitute. The danger is that whenever you discover a subject who reports no family, you'll unconsciously try to find some evidence in the subject's comments that the cult is a substitute for family. If at all possible, then, get someone else to code some of your cases to see whether that person makes the same assignments you made.

FIGURE 14-1 A Partial Codebook

POLVIEWS
We hear a lot of talk these days about liberals and conservatives. I'm going to show you a seven-point scale on which the political views that people might hold are arranged from extremely liberal—point 1—to extremely conservative—point 7. Where would you place yourself on this scale?
1. Extremely liberal
2. Liberal
3. Slightly liberal
4. Moderate, middle of the road
5. Slightly conservative
6. Conservative
7. Extremely conservative
8. Don't know
9. No answer

ATTEND
How often do you attend religious services?
0. Never
1. Less than once a year
2. About once or twice a year
3. Several times a year
4. About once a month
5. 2–3 times a month
6. Nearly every week
7. Every week
8. Several times a week
9. Don't know, No answer
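The coder-verification step just described amounts to a simple comparison. Here is a minimal Python sketch, with invented sample codes; plain percent agreement is used for simplicity (more refined intercoder-reliability measures exist, but the logic of the check is the same).

```python
# Compare two coders' assignments on the same cases and flag discrepancies.
# Discrepancies indicate an imperfect communication of the coding scheme.

def percent_agreement(codes_a, codes_b):
    if len(codes_a) != len(codes_b):
        raise ValueError("Both coders must code the same cases")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)

def discrepancies(codes_a, codes_b):
    """Indices of cases where the two coders disagree."""
    return [i for i, (a, b) in enumerate(zip(codes_a, codes_b)) if a != b]

# Invented example: my codes versus a second coder's, on four cases.
my_codes    = ["Academic", "Facilities", "Administrative", "Academic"]
their_codes = ["Academic", "Facilities", "Academic",       "Academic"]

print(percent_agreement(my_codes, their_codes))  # 75.0
print(discrepancies(my_codes, their_codes))      # [2]
```

Each flagged case (here, case 2) is a candidate for discussion: either the scheme's definitions need tightening or one coder misunderstood a category.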
Codebook Construction

The end product of the coding process in quantitative analysis is the conversion of data items into numerical codes. These codes represent attributes composing variables, which, in turn, are assigned locations within a data file. A codebook is a document that describes the locations of variables and lists the assignments of codes to the attributes composing those variables.

codebook The document used in data processing and analysis that tells the location of different data items in a data file. Typically, the codebook identifies the locations of data items and the meaning of the codes used to represent different attributes of variables.

A codebook serves two essential functions. First, it is the primary guide used in the coding process. Second, it is your guide for locating variables and interpreting codes in your data file during analysis. If you decide to correlate two variables as a part of your analysis of your data, the codebook tells you where to find the variables and what the codes represent. Figure 14-1 is a partial codebook created from two variables from the General Social Survey. Though there is no one right format for a codebook, this example presents some of the common elements. Notice first that each variable is identified by an abbreviated variable name: POLVIEWS, ATTEND. We can determine the religious service attendance of respondents, for example, by referencing ATTEND. This example uses the format established by the General Social Survey, which has been carried over into SPSS. Other data sets and/or analysis programs might format variables differently. Some use numerical codes in place of abbreviated names, for example. You must, however, have some identifier that will allow you to locate and use the variable in question. Next, every codebook should contain the full definition of the variable.
In the case of a questionnaire, the definition consists of the exact wordings of the questions asked, because, as we've seen, the wording of questions strongly influences the answers returned. In the case of POLVIEWS, you know that respondents were given the several political categories and asked to pick the one that best fit them. The codebook also indicates the attributes composing each variable. In POLVIEWS, for example, the political categories just mentioned serve as these attributes: "Extremely liberal," "Liberal," "Slightly liberal," and so forth. Finally, notice that each attribute also has a numeric label. Thus, in POLVIEWS, "Extremely liberal" is code category 1. These numeric codes are used in various manipulations of the data. For example, you might decide to combine categories 1 through 3 (all the "liberal" responses). It's easier to do this with code numbers than with lengthy names. You can visit the GSS codebook online at http://webapp.icpsr.umich.edu/GSS/. If you know the symbolic name (e.g., POLVIEWS), you can locate it in the Mnemonic listing. Otherwise, you can browse the "Index by Subject" to find all the different questions that have been asked regarding a particular topic.

Data Entry

In addition to transforming data into quantitative form, researchers interested in quantitative analysis also need to convert data into a machine-readable format, so that computers can read and manipulate the data. There are many ways of accomplishing this step, depending on the original form of your data and also the computer program you'll use for analyzing the data. I'll simply introduce you to the process here.

univariate analysis The analysis of a single variable, for purposes of description. Frequency distributions, averages, and measures of dispersion are examples of univariate analysis, as distinguished from bivariate and multivariate analysis.
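To connect the codebook idea to analysis code: a codebook can be represented as a mapping from variable names to code-label pairs, which a program then uses to interpret and recode values. The Python structure below is purely illustrative (it is not the GSS or SPSS file format), and the grouping of categories 5 through 7 into "Conservative" mirrors the 1-through-3 "liberal" example in the text but is my own addition.

```python
# A codebook for POLVIEWS and ATTEND (Figure 14-1) as a Python mapping.

codebook = {
    "POLVIEWS": {
        1: "Extremely liberal", 2: "Liberal", 3: "Slightly liberal",
        4: "Moderate, middle of the road",
        5: "Slightly conservative", 6: "Conservative",
        7: "Extremely conservative", 8: "Don't know", 9: "No answer",
    },
    "ATTEND": {
        0: "Never", 1: "Less than once a year", 2: "About once or twice a year",
        3: "Several times a year", 4: "About once a month",
        5: "2-3 times a month", 6: "Nearly every week", 7: "Every week",
        8: "Several times a week", 9: "Don't know, No answer",
    },
}

def label(variable, code):
    """Interpret a numeric code during analysis, via the codebook."""
    return codebook[variable][code]

def collapse_polviews(code):
    """Combine categories 1-3 into 'Liberal' (as in the text); the
    symmetric 5-7 grouping is an assumption for the example."""
    if code in (1, 2, 3):
        return "Liberal"
    if code in (5, 6, 7):
        return "Conservative"
    if code == 4:
        return "Moderate"
    return None  # codes 8 and 9: missing data

print(label("ATTEND", 7))    # Every week
print(collapse_polviews(2))  # Liberal
```

This is exactly why working with code numbers beats working with lengthy labels: combining categories is a one-line membership test rather than string matching.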
If you find yourself undertaking this task, you should be able to tailor your work to the particular data source and program you’re using. If your data have been collected by questionnaire, you might do your coding on the questionnaire itself. Then, data-entry specialists (including yourself) could enter the data into, say, an SPSS data matrix or into an Excel spreadsheet that would later be imported into SPSS.

Sometimes, social researchers use optical scan sheets for data collection. These sheets can be fed into machines that convert the black marks into data, which can be imported into the analysis program. This procedure works only with subjects who are comfortable using such sheets, and it’s usually limited to closed-ended questions.

Sometimes, data entry occurs in the process of data collection. In Computer Assisted Telephone Interviewing, for example, the interviewer keys responses directly into the computer, where the data are compiled for analysis (see Chapter 9). Even more effortlessly, online surveys can be constructed so that respondents enter their own answers directly into the accumulating database, without the need for an intervening interviewer or data-entry person.

Once data have been fully quantified and entered into the computer, researchers can begin quantitative analysis. Let’s look at the three cases mentioned at the start of this chapter: univariate, bivariate, and multivariate analyses.

UNIVARIATE ANALYSIS

The simplest form of quantitative analysis, univariate analysis, involves describing a case in terms of a single variable—specifically, the distribution of attributes that compose it. For example, if sex were measured, we would look at how many of the subjects were men and how many were women.

Distributions

The most basic format for presenting univariate data is to report all individual cases, that is, to list the attribute for each case under study in terms of the variable in question.
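The progression just described, from a raw list of coded cases to a summary count, can be sketched in a few lines of Python. The codes and responses below are hypothetical, not drawn from the GSS; they simply mimic a POLVIEWS-style codebook.

```python
from collections import Counter

# Hypothetical codebook: numeric codes mapped to attribute labels
# (modeled on POLVIEWS, but the data below are invented for illustration).
codebook = {
    1: "Extremely liberal", 2: "Liberal", 3: "Slightly liberal",
    4: "Moderate", 5: "Slightly conservative", 6: "Conservative",
    7: "Extremely conservative",
}

# Ten cases, one coded attribute each: the "report all individual cases" format.
raw_cases = [4, 2, 6, 4, 7, 4, 1, 5, 4, 3]
for case in raw_cases:
    print(codebook[case])

# The same data summarized as counts per attribute.
counts = Counter(raw_cases)
for code in sorted(counts):
    print(f"{code} {codebook[code]}: {counts[code]}")
```

Listing every case quickly becomes unreadable as the number of cases grows, which is why the counted summary is the usual report.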
Let’s take as an example the General Social Survey (GSS) data on attendance at religious services, ATTEND. Table 14-4 presents the results of an SPSS analysis of this variable. Let’s examine the table, piece by piece.

First, if you look near the bottom of the table, you’ll see that the sample being analyzed has a total of 2,812 cases. In the last row above the totals, you’ll see that 11 of the 2,812 respondents either said they didn’t know (DK) or gave no answer (NA) in response to this question. So our assessment of U.S. attendance at religious services in 2004 will be based on the 2,801 respondents who answered the question.

Go back to the top of the table now. You’ll see that 471 people said they never went to religious services. This number in and of itself tells us nothing about religious practices. It does not, in itself, give us an idea of whether the “average American” attends religious services a little or a lot. By analogy, suppose your best friend tells you that she drank a six-pack of beer. Is that a little beer or a lot? The answer, of course, depends on whether she consumed the beer in a month, a week, a day, or an hour. In the case of religious participation, similarly, we need some basis for assessing the number that represents the people who never attend religious services.

One way to assess the number is to calculate the percentage of all respondents who said they never go to religious services. If you were to divide 471 by the 2,801 who gave some answer, you would get 16.8 percent, which appears in the table as the “Valid Percent.” Now we can say that about 17 percent, or roughly one U.S. adult in six, reports never attending religious services. This result is more meaningful, but does it suggest that people in the United States are generally nonreligious? A further look at Table 14-4 shows that the response category most often chosen was “Every Week,” with 18.1 percent of the respondents giving that answer.
Add to that the 8.6 percent who report attending religious services more than once a week, and we find that over a fourth (26.7 percent) of U.S. adults say they attend religious services at least once a week.

TABLE 14-4 GSS Attendance at Religious Services, 2004
How Often R Attends Religious Services

Value Label         Value   Frequency   Percent   Valid Percent   Cum Percent
NEVER                 0        471        16.7         16.8           16.8
LT ONCE A YEAR        1        198         7.0          7.1           23.9
ONCE A YEAR           2        396        14.1         14.1           38.0
SEVRL TIMES A YR      3        371        13.2         13.2           51.3
ONCE A MONTH          4        191         6.8          6.8           58.1
2–3X A MONTH          5        255         9.1          9.1           67.2
NRLY EVERY WEEK       6        169         6.0          6.0           73.2
EVERY WEEK            7        508        18.1         18.1           91.4
MORE THN ONCE WK      8        242         8.6          8.6          100.0
DK,NA                 9         11         0.4
Total                        2,812       100.0         99.8
Valid cases: 2,801   Missing cases: 11
Source: General Social Survey, 2004, National Opinion Research Center.

FIGURE 14-2 Bar Chart of GSS ATTEND, 2004. Percentages (vertical axis) of respondents selecting each answer, from “Never” through “More than once a week” (horizontal axis).

As you can see, each new comparison gives a more complete picture of the data. A description of the number of times that the various attributes of a variable are observed in a sample is called a frequency distribution. Sometimes it’s easiest to see a frequency distribution in a graph. Figure 14-2 was created by SPSS from the GSS data on ATTEND. The vertical scale on the left side of the graph indicates the percentages selecting each of the answers displayed along the horizontal axis of the graph. Take a minute to notice how the percentages in Table 14-4 correspond to the heights of the bars in Figure 14-2.

frequency distribution: A description of the number of times the various attributes of a variable are observed in a sample. The report that 53 percent of a sample were men and 47 percent were women would be a simple example of a frequency distribution.
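The “Percent,” “Valid Percent,” and “Cum Percent” columns of Table 14-4 can be recomputed directly from the frequencies, a useful check on how SPSS arrives at them. This is a sketch using only the numbers printed in the table.

```python
# Frequencies from Table 14-4; "DK,NA" responses are treated as missing.
freqs = {
    "Never": 471, "Less than once a year": 198, "Once a year": 396,
    "Several times a year": 371, "Once a month": 191,
    "2-3 times a month": 255, "Nearly every week": 169,
    "Every week": 508, "More than once a week": 242,
}
missing = 11
valid = sum(freqs.values())      # 2,801 respondents answered
total = valid + missing          # 2,812 cases in all

cum = 0.0
for label, n in freqs.items():
    percent = 100 * n / total     # share of all cases, including DK/NA
    valid_pct = 100 * n / valid   # share of those who answered
    cum += valid_pct              # running total down the column
    print(f"{label:22s} {percent:5.1f} {valid_pct:5.1f} {cum:6.1f}")
```

For example, 100 × 471 / 2,801 rounds to 16.8, the “Valid Percent” for “Never” discussed above.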
average: An ambiguous term generally suggesting typical or normal—a central tendency. The mean, median, and mode are specific examples of mathematical averages.

mean: An average computed by summing the values of several observations and dividing by the number of observations. If you now have a grade point average of 4.0 based on 10 courses, and you get an F in this course, your new grade point (mean) average will be 3.6.

Central Tendency

Beyond simply reporting the overall distribution of values, sometimes called the marginal frequencies or just the marginals, you may choose to present your data in the form of an average, or measure of central tendency. You’re already familiar with the concept of central tendency from the many kinds of averages you use in everyday life to express the “typical” value of a variable. For instance, in baseball a batting average of .300 says that a batter gets a hit in three out of every ten opportunities on average. Over the course of a season, a hitter might go through extended periods without getting any hits at all and go through other periods when he or she gets a bunch of hits all at once. Over time, though, the central tendency of the batter’s performance can be expressed as getting three hits in every ten chances. Similarly, your grade point average expresses the “typical” value of all your grades taken together, even though some of them might be A’s, others B’s, and one or two might be C’s (I know you never get anything lower than a C).

Averages like these are more properly called the arithmetic mean (the result of dividing the sum of the values by the total number of cases). The mean is only one way to measure central tendency or “typical” values. Two other options are the mode (the most frequently occurring attribute) and the median (the middle attribute in the ranked distribution of observed attributes). Here’s how the three averages would be calculated from a set of data.
Suppose you’re conducting an experiment that involves teenagers as subjects. They range in age from 13 to 19, as indicated in the following table:

Age   Number
13      3
14      4
15      6
16      8
17      4
18      3
19      3

Now that you’ve seen the actual ages of the 31 subjects, how old would you say they are in general, or “on average”? Let’s look at three different ways you might answer that question.

The easiest average to calculate is the mode, the most frequent value. As you can see, there were more 16-year-olds (eight of them) than any other age, so the modal age is 16, as indicated in Figure 14-3. Technically, the modal age is the category “16,” which may include some people who are closer to 17 than 16 but who haven’t yet reached that birthday.

Figure 14-3 also demonstrates the calculation of the mean. There are three steps: (1) multiply each age by the number of subjects who have that age, (2) total the results of all those multiplications, and (3) divide that total by the number of subjects. In the case of age, a special adjustment is needed. As indicated in the discussion of the mode, those who call themselves “13” actually range from exactly 13 years old to those just short of 14. It is reasonable to assume, moreover, that as a group the “13-year-olds” in the country are evenly distributed within that one-year span, making their average age 13.5 years. This is true for each of the age groups. Hence, it is appropriate to add 0.5 years to the final calculation, making the mean age 16.37, as indicated in Figure 14-3.

The third measure of central tendency, the median, represents the “middle” value: Half are above it, half below. If we had the precise ages of each subject (for example, 17 years and 124 days), we’d be able to arrange all 31 subjects in order by age, and the median for the whole group would be the age of the middle subject. As you can see, however, we do not know precise ages; our data constitute “grouped data” in this regard.
For example, three people who are not precisely the same age have been grouped in the category “13-year-olds.”

Figure 14-3 illustrates the logic of calculating a median for grouped data. Because there are 31 subjects altogether, the “middle” subject would be subject number 16 if they were arranged by age—15 teenagers would be younger and 15 older. Look at the bottom portion of Figure 14-3, and you’ll see that the middle person is one of the eight 16-year-olds. In the enlarged view of that group, we see that number 16 is the third from the left.

Because we do not know the precise ages of the subjects in this group, the statistical convention here is to assume they are evenly spread along the width of the group. In this instance, the possible ages of the subjects go from 16 years and no days to 16 years and 364 days. Strictly speaking, then, the width of the interval is 364/365 of a year; as a practical matter, it’s sufficient to call it one year. If the eight subjects in this group were evenly spread from one limit to the other, they would be one-eighth of a year apart from each other—a 0.125-year interval. If we place the first subject half an interval from the lower limit and add a full interval to the age of each successive subject, the final one is half an interval from the upper limit.

mode: An average representing the most frequently observed value or attribute. If a sample contains 1,000 Protestants, 275 Catholics, and 33 Jews, “Protestant” is the modal category.

median: An average representing the value of the “middle” case in a rank-ordered set of observations. If the ages of five men are 16, 17, 20, 54, and 88, the median would be 20. (The mean would be 39.)

FIGURE 14-3 Three “Averages.” From the age table:
Mode = 16 (the most frequent value).
Mean = (13×3 + 14×4 + 15×6 + 16×8 + 17×4 + 18×3 + 19×3) ÷ 31 = 492 ÷ 31 = 15.87; + 0.50 = 16.37.
Median = 16.31: the ranked positions are 1–3 (age 13), 4–7 (age 14), 8–13 (age 15), 14–21 (age 16), 22–25 (age 17), 26–28 (age 18), and 29–31 (age 19); the eight 16-year-olds are assumed to be spread evenly at 16.06, 16.19, 16.31, 16.44, 16.56, 16.69, 16.81, and 16.94, so the middle (16th) subject, the third of these, falls at 16.31.

What we’ve done is calculate, hypothetically, the precise ages of the eight subjects, assuming their ages were spread out evenly. Having done this, we merely note the age of the middle subject—16.31—and that is the median age for the group.

Whenever the total number of subjects is an even number, of course, there is no middle case. To get the median, you merely calculate the mean of the two values on either side of the midpoint in the ranked data. Suppose, for example, that there was one more 19-year-old in our sample, giving us a total of 32 cases. The midpoint would then fall between subjects 16 and 17. The median would therefore be calculated as (16.31 + 16.44)/2 = 16.38.

As you can see in Figure 14-3, the three measures of central tendency produce three different values for this set of data, which is often (but not necessarily) the case. Which measure, then, best represents the “typical” value? More generally, which measure of central tendency should you prefer? The answer depends on the nature of your data and the purpose of your analysis. For example, whenever means are presented, you should be aware that they are susceptible to extreme values—a few very large or very small numbers. As only one example, the (mean) average person in Redmond, Washington, has a net worth in excess of a million dollars.
If you were to visit Redmond, however, you would not find that the “average” resident lives up to your idea of a millionaire. The very high mean reflects the influence of one extreme case among Redmond’s 40,000 residents—Bill Gates of Microsoft, who has a net worth (at the time this is being written) of tens of billions of dollars. Clearly, the median wealth would give you a more accurate picture of the residents of Redmond as a whole. This example should illustrate the need to choose carefully among the various measures of central tendency. A course or textbook in statistics will give you a fuller understanding of the variety of situations in which each is appropriate.

Dispersion

Averages offer readers the advantage of reducing the raw data to the most manageable form: A single number (or attribute) can represent all the detailed data collected in regard to the variable. This advantage comes at a cost, of course, because the reader cannot reconstruct the original data from an average. Summaries of the dispersion of responses can somewhat alleviate this disadvantage. Dispersion refers to the way values are distributed around some central value, such as an average. The simplest measure of dispersion is the range: the distance separating the highest from the lowest value. Thus, besides reporting that our subjects have a mean age of 15.87, we might also indicate that their ages range from 13 to 19.

A more sophisticated measure of dispersion is the standard deviation. This measure was briefly mentioned in Chapter 7 as the standard error of a sampling distribution. Essentially, the standard deviation is an index of the amount of variability in a set of data. A higher standard deviation means that the data are more dispersed; a lower standard deviation means that they are more bunched together. Figure 14-4 illustrates the basic idea.
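Before turning to the figure, the measures introduced so far (mode, adjusted mean, grouped-data median, range, and standard deviation) can all be checked for the 31 teenagers with Python’s statistics module. This is a sketch of the textbook’s arithmetic, not SPSS output.

```python
import statistics

# The grouped ages from the earlier table, expanded to one entry per subject.
ages = [13]*3 + [14]*4 + [15]*6 + [16]*8 + [17]*4 + [18]*3 + [19]*3

mode = statistics.mode(ages)           # 16, the most frequent value
mean = statistics.mean(ages) + 0.5     # 15.87 + 0.50 = 16.37 (midpoint adjustment)
age_range = (min(ages), max(ages))     # ages run from 13 to 19
sd = statistics.pstdev(ages)           # dispersion around the raw mean

# Grouped-data median: case 16 of 31 falls among the eight 16-year-olds;
# 13 cases lie below the 16 category, which is one year wide.
median = 16 + (31 / 2 - 13) / 8        # 16 + 2.5/8 = 16.3125
```

The small difference between 16.3125 here and the 16.31 in Figure 14-3 is just rounding.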
Notice that the professional golfer not only has a lower mean score but is also more consistent—represented by the smaller standard deviation. The duffer, on the other hand, has a higher average and is also less consistent: sometimes doing much better, sometimes much worse.

dispersion: The distribution of values around some central value, such as an average. The range is a simple example of a measure of dispersion. Thus, we may report that the mean age of a group is 37.9, and the range is from 12 to 89.

standard deviation: A measure of dispersion around the mean, calculated so that approximately 68 percent of the cases will lie within plus or minus one standard deviation from the mean, 95 percent will lie within plus or minus two standard deviations, and 99.7 percent will lie within three standard deviations. Thus, for example, if the mean age in a group is 30 and the standard deviation is 10, then 68 percent have ages between 20 and 40. The smaller the standard deviation, the more tightly the values are clustered around the mean; if the standard deviation is high, the values are widely spread out.

FIGURE 14-4 High and Low Standard Deviations.
(a) High standard deviation = spread-out values: the amateur golfer’s scores, mean = 100.
(b) Low standard deviation = tightly clustered values: the professional golfer’s scores, mean = 70.
(In each panel, about 68 percent of the values fall within one standard deviation of the mean.)

There are many other measures of dispersion. In reporting intelligence test scores, for example, researchers might determine the interquartile range, the range of scores for the middle 50 percent of subjects. If the top one-fourth had scores ranging from 120 to 150 and the bottom one-fourth had scores ranging from 60 to 90, the report might say that the interquartile range was from 90 to 120 (or 30 points), with a mean score of, let’s say, 102.

continuous variable: A variable whose attributes form a steady progression, such as age or income. Thus, the ages of a group of people might include 21, 22, 23, 24, and so forth and could even be broken down into fractions of years.

discrete variable: A variable whose attributes are separate from one another, or discontinuous, as in the case of sex or religious affiliation. In other words, there is no progression from male to female in the case of sex.

Continuous and Discrete Variables

The preceding calculations are not appropriate for all variables. To understand this point, we must distinguish between two types of variables: continuous and discrete. A continuous variable (or ratio variable) increases steadily in tiny fractions. An example is age, which increases steadily with each increment of time. A discrete variable jumps from category to category without intervening steps. Examples include sex, military rank, and year in college (you go from being a sophomore to a junior in one step).

In analyzing a discrete variable—a nominal or ordinal variable, for example—some of the techniques discussed previously do not apply. Strictly speaking, modes should be calculated for nominal data, medians for ordinal data, and means for interval or ratio data (see Chapter 5). If the variable in question is sex, for example, raw numbers (23 of the cross-dressing outlaw bikers in our sample are women) or percentages (7 percent are women) can be appropriate and useful analyses, but neither a median nor a mean would make any sense. Calculating the mode would be legitimate, though not very revealing, because it would only tell us that “most were men.” However, the mode for data on religious affiliation might be more interesting, as in “most people in the United States are Protestant.”

Detail versus Manageability

In presenting univariate and other data, you’ll be constrained by two goals. On the one hand, you should attempt to provide your reader with the fullest degree of detail regarding those data.
On the other hand, the data should be presented in a manageable form. As these two goals often directly conflict, you’ll find yourself continually seeking the best compromise between them. One useful solution is to report a given set of data in more than one form. In the case of age, for example, you might report the distribution of ungrouped ages plus the mean age and standard deviation.

As you can see from this introductory discussion of univariate analysis, this seemingly simple matter can be rather complex. In any event, the lessons of this section pave the way for a consideration of subgroup comparisons and bivariate analyses.

SUBGROUP COMPARISONS

Univariate analyses describe the units of analysis of a study and, if they are a sample drawn from some larger population, allow us to make descriptive inferences about the larger population. Bivariate and multivariate analyses are aimed primarily at explanation. Before turning to explanation, however, we should consider the case of subgroup description.

Often it’s appropriate to describe subsets of cases, subjects, or respondents. Here’s a simple example from the General Social Survey. In 2004, respondents were asked, “Should marijuana be made legal?” In response, 33.4 percent said it should and 66.6 percent said it shouldn’t. Table 14-5 presents the responses given to this question by respondents in different age categories.

TABLE 14-5 Marijuana Legalization by Age of Respondents, 2004

                           Under 21   21–35   36–54   55 and Older
Should be legalized           27%      40%     37%        24%
Should not be legalized       73       60      63         76
100% =                       (34)    (238)   (338)      (265)

Source: General Social Survey, 2004, National Opinion Research Center.

Notice that the subgroup comparisons tell us how different groups in the population responded to this question. You can undoubtedly see a pattern in the results, though possibly not exactly what you expected; we’ll return to that in a moment. First, let’s see how another set of subgroups answered this question. Table 14-6 presents attitudes toward legalizing marijuana by different political subgroups, based on whether respondents characterized themselves as conservative or liberal. Before looking at the table, you might try your hand at hypothesizing what the results are likely to be and why.

TABLE 14-6 Marijuana Legalization by Political Orientation, 2004

                          Should Legalize   Should Not Legalize   100% =
Extremely liberal               77%                 23             (30)
Liberal                         49%                 51             (75)
Slightly liberal                35%                 65             (92)
Moderate                        33%                 67            (326)
Slightly conservative           32%                 68            (136)
Conservative                    25%                 75            (155)
Extremely conservative          16%                 84             (37)

Source: General Social Survey, 2004, National Opinion Research Center.

Notice that I have changed the direction of percentaging this table, to make it easier to read. To compare the subgroups in this case, you would read down the columns, not across them. Before examining the logic of causal analysis, let’s consider another example of subgroup comparisons—one that will let us address some table-formatting issues.

“Collapsing” Response Categories

“Textbook examples” of tables are often simpler than those you’ll typically find in published research reports or in your own analyses of data, so this section and the next one address two common problems and suggest solutions. Let’s begin by turning to Table 14-7, which reports data collected in a multinational poll conducted by the New York Times, CBS News, and the Herald Tribune in 1985, concerning attitudes about the United Nations. The question reported in Table 14-7 deals with general attitudes about the way the UN was handling its job. How do people in the five nations reported in Table 14-7 compare in their support for the kind of job the UN was doing?
As you review the table, you may find there are simply so many numbers that it’s hard to see any meaningful pattern. Part of the problem with Table 14-7 lies in the relatively small percentages of respondents selecting the two extreme response categories: the UN is doing a “very good” or a “very poor” job. Furthermore, although it might be tempting to read only the second line of the table (those saying “good job”), that would be improper. Looking at only the second row, we would conclude that West Germany and the United States were the most positive (46 percent each) about the UN’s performance, followed closely by France (45 percent), with Britain (39 percent) less positive than any of those three and Japan (11 percent) the least positive of all. This procedure is inappropriate in that it ignores all those respondents who gave the most positive answer of all: “very good job.”

In a situation like this, you should combine or “collapse” the two ends of the range of variation. In this instance, combine “very good” with “good” and “very poor” with “poor.” If you were to do this in the analysis of your own data, it would be wise to add the raw frequencies together and recompute percentages for the combined categories, but in analyzing a published table such as this one, you can simply add the percentages, as illustrated by the results shown in Table 14-8.

With the collapsed categories illustrated in Table 14-8, we can now rather easily read across the several national percentages of people who said the UN was doing at least a good job. Now the United States appears the most positive; Germany, Britain, and France are only slightly less positive and are nearly indistinguishable from one another; and Japan stands alone in its quite low assessment of the UN’s performance.
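With your own raw data, the safer procedure the text recommends is to add the raw frequencies for the combined categories and recompute percentages. Here is a minimal Python sketch; the counts are hypothetical, not the Table 14-7 frequencies, which were published only as percentages.

```python
# Hypothetical five-category counts for one country (not from Table 14-7).
counts = {"Very good": 40, "Good": 460, "Poor": 230,
          "Very poor": 40, "Don't know": 230}
total = sum(counts.values())   # 1,000 invented respondents

# Collapse the extremes into their neighbors, then recompute percentages.
collapsed = {
    "Good job or better": counts["Very good"] + counts["Good"],
    "Poor job or worse": counts["Poor"] + counts["Very poor"],
    "Don't know": counts["Don't know"],
}
percents = {label: round(100 * n / total) for label, n in collapsed.items()}
# percents -> {'Good job or better': 50, 'Poor job or worse': 27, "Don't know": 23}
```

Summing the raw counts before dividing avoids the small rounding errors that creep in when published percentages are simply added.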
TABLE 14-7 [Text not available due to copyright restrictions]

TABLE 14-8 Collapsing Extreme Categories

                      West Germany   Britain   France   Japan   United States
Good job or better         48%         46%       47%     12%        51%
Poor job or worse          27          37        25      48         40
Don’t know                 26          17        28      41         10

Although the conclusions to be drawn now do not differ radically from what we might have concluded from simply reading the second line of Table 14-7, we should note that Britain now appears relatively more supportive.

Here’s the risk I’d like to spare you. Suppose you had hastily read the second row of Table 14-7 and noted that the British had a somewhat lower assessment of the job the UN was doing than was true of people in the United States, West Germany, and France. You might feel obliged to think up an explanation for why that was so—possibly creating an ingenious psychohistorical theory about the painful decline of the once powerful and dignified British Empire. Then, once you had touted your “theory” about, someone else might point out that a proper reading of the data would show the British were actually not really less positive than the other three nations. This is not a hypothetical risk. Errors like these happen frequently, but they can be avoided by collapsing answer categories where appropriate.

Handling “Don’t Knows”

Tables 14-7 and 14-8 illustrate another common problem in the analysis of survey data. It’s usually a good idea to give people the option of saying “don’t know” or “no opinion” when asking for their opinions on issues. But what do you do with those answers when you analyze the data? Notice there is a good deal of variation in the national percentages saying “don’t know” in this instance, ranging from only 10 percent in the United States to 41 percent in Japan. The presence of substantial percentages saying they don’t know can confuse the results of tables like these.
For example, were the Japanese so much less likely to say the UN was doing a good job simply because so many didn’t express any opinion? Here’s an easy way to recalculate percentages with the “don’t knows” excluded. Look at the first column of percentages in Table 14-8: West Germany’s answers to the question about the UN’s performance. Notice that 26 percent of the respondents said they didn’t know. This means that those who said “good” or “bad” job—taken together—represent only 74 percent (100 minus 26) of the whole. If we divide the 48 percent saying “good job or better” by 0.74 (the proportion giving any opinion), we can say that 65 percent “of those with an opinion” said the UN was doing a good or very good job (48% ÷ 0.74 = 65%). Table 14-9 presents the whole table with the “don’t knows” excluded.

Notice that these new data offer a somewhat different interpretation than do the previous tables. Specifically, it would now appear that France and West Germany were the most positive in their assessments of the UN, with the United States and Britain a bit lower. Although Japan still stands out as lowest in this regard, it has moved from 12 percent to 20 percent positive.

At this point, having seen three versions of the data, you may be asking yourself, Which is the right one? The answer depends on your purpose in analyzing and interpreting the data. For example, if it is not essential for you to distinguish “very good” from “good,” it makes sense to combine them, because it’s easier to read the table. Whether to include or exclude the “don’t knows” is harder to decide in the abstract. It may be a very important finding that such a large percentage of the Japanese had no opinion—if you wanted to find out whether people were familiar with the work of the UN, for example.
On the other hand, if you wanted to know how people might vote on an issue, it might be more appropriate to exclude the “don’t knows” on the assumption that they wouldn’t vote or that ultimately they would be likely to divide their votes between the two sides of the issue.

TABLE 14-9 Omitting the “Don’t Knows”

                      West Germany   Britain   France   Japan   United States
Good job or better         65%         55%       65%     20%        57%
Poor job or worse          35%         45%       35%     81%        44%

In any event, the truth contained within your data is that a certain percentage said they didn’t know and the remainder divided their opinions in whatever manner they did. Often, it’s appropriate to report your data in both forms—with and without the “don’t knows”—so your readers can also draw their own conclusions. Of course, you yourself will be a reader of such tables, drawn up by others, and knowing the logic behind constructing them will help you be a savvy consumer of quantitative data. See the box “Your Life and the Numbers” for more on this.

Numerical Descriptions in Qualitative Research

Although this chapter deals primarily with quantitative research, the discussions are also relevant to qualitative studies. Numerical testing can often verify the findings of in-depth, qualitative studies. Thus, for example, when David Silverman wanted to compare the cancer treatments received by patients in private clinics with those in Britain’s National Health Service, he primarily chose in-depth analyses of the interactions between doctors and patients:

My method of analysis was largely qualitative and . . . I used extracts of what doctors and patients had said as well as offering a brief ethnography of the setting and of certain behavioural data. In addition, however, I constructed a coding form which enabled me to collate a number of crude measures of doctor and patient interactions.
— (1993:163)

IN THE REAL WORLD: YOUR LIFE AND THE NUMBERS

You’ll likely be analyzing quantitative data in most aspects of your life for as long as you live. Doing it well is a really good idea. Do you keep running short of money? A rigorous budgeting system may be the solution, telling you where your money is going and giving you the possibility of addressing problems. Being comfortable with quantitative analyses can also be a big help in connection with sports—whether for fun or profit. And, who knows, you may have children one day who need help with their math. Being able to assist them in that regard may impress them and make up for what they regard as your deficiencies in dress and musical taste.

Not only did the numerical data fine-tune Silverman’s impressions based on his qualitative observations, but his in-depth understanding of the situation allowed him to craft an ever more appropriate quantitative analysis. Listen to the interaction between qualitative and quantitative approaches in this lengthy discussion:

My overall impression was that private consultations lasted considerably longer than those held in the NHS clinics. When examined, the data indeed did show that the former were almost twice as long as the latter (20 minutes as against 11 minutes) and that the difference was statistically highly significant. However, I recalled that, for special reasons, one of the NHS clinics had abnormally short consultations. I felt a fairer comparison of consultations in the two sectors should exclude this clinic and should only compare consultations taken by a single doctor in both sectors. This subsample of cases revealed that the difference in length between NHS and private consultations was now reduced to an average of under 3 minutes. This was still statistically significant, although the significance was reduced.
Finally, however, if I compared only new patients seen by the same doctor, NHS patients got 4 minutes more on the average—34 minutes as against 30 minutes in the private clinic.

— (SILVERMAN 1993:163–64)

This example further demonstrates the special power that can be gained from a combination of approaches in social research. The combination of qualitative and quantitative analyses can be especially potent.

BIVARIATE ANALYSIS

In contrast to univariate analysis, subgroup comparisons involve two variables. In this respect, subgroup comparisons constitute a kind of bivariate analysis—that is, an analysis of two variables simultaneously. However, as with univariate analysis, the purpose of subgroup comparisons is largely descriptive. Most bivariate analysis in social research adds another element: determining relationships between the variables themselves. Thus, univariate analysis and subgroup comparisons focus on describing the people (or other units of analysis) under study, whereas bivariate analysis focuses on the variables and their empirical relationships.

Table 14-10 could be regarded as an instance of subgroup comparison: It independently describes the attendance of men and women at religious services, as reported in the 2004 General Social Survey. It shows—comparatively and descriptively—that the women under study attended religious services more often than did the men. However, the same table, seen as an explanatory bivariate analysis, tells a somewhat different story. It suggests that the variable sex has an effect on the variable religious service attendance. That is, we can view the behavior as a dependent variable that is partially determined by the independent variable, sex. Explanatory bivariate analyses, then, involve the “variable language” introduced in Chapter 1.
In a subtle shift of focus, we are no longer talking about men and women as different subgroups but about sex as a variable: one that has an influence on other variables. The theoretical interpretation of Table 14-10 might be taken from Charles Glock’s Comfort Hypothesis, as discussed in Chapter 2:

1. Women are still treated as second-class citizens in U.S. society.
2. People denied status gratification in the secular society may turn to religion as an alternative source of status.
3. Hence, women should be more religious than men.

TABLE 14-10 Religious Attendance Reported by Men and Women in 2004

             Men       Women
Weekly       22%       31%
Less often   78        69
100% =       (1,276)   (1,525)

Source: General Social Survey, 2004, National Opinion Research Center.

The data presented in Table 14-10 confirm this reasoning. Thirty-one percent of the women attend religious services weekly, as compared with 22 percent of the men.

Adding the logic of causal relationships among variables has an important implication for the construction and reading of percentage tables. One of the chief bugaboos for new data analysts is deciding on the appropriate “direction of percentaging” for any given table. In Table 14-10, for example, I’ve divided the group of subjects into two subgroups—men and women—and then described the behavior of each subgroup. That is the correct method for constructing this table. Notice, however, that we could—however inappropriately—construct the table differently. We could first divide the subjects into different degrees of religious attendance and then describe each of those subgroups in terms of the percentage of men and women in each. This method would make no sense in terms of explanation, however. Table 14-10 suggests that your sex will affect your frequency of religious service attendance. Had we used the other method of construction, the table would suggest that your religious service attendance affects whether you are a man or a woman—which makes no sense. Your behavior cannot determine your sex.

A related problem complicates the lives of new data analysts.
How do you read a percentage table? There is a temptation to read Table 14-10 as follows: “Of the women, only 31 percent attended religious services weekly, and 69 percent said they attended less often; therefore, being a woman makes you less likely to attend religious services frequently.” This is, of course, an incorrect reading of the table. Any conclusion that sex—as a variable—has an effect on religious service attendance must hinge on a comparison between men and women. Specifically, we compare the 31 percent with the 22 percent and note that women are more likely than men to attend religious services weekly. The comparison of subgroups, then, is essential in reading an explanatory bivariate table.

bivariate analysis: The analysis of two variables simultaneously, for the purpose of determining the empirical relationship between them. The construction of a simple percentage table or the computation of a simple correlation coefficient are examples of bivariate analyses.

In constructing and presenting Table 14-10, I’ve used a convention called percentage down. This term means that you can add the percentages down each column to total 100 percent. You read this form of table across a row. For the row labeled “Weekly,” what percentage of the men attend weekly? What percentage of the women attend weekly?

The direction of percentaging in tables is arbitrary, and some researchers prefer to percentage across, as I did in Table 14-6. They would organize Table 14-10 so that “Men” and “Women” were shown on the left side of the table, identifying the two rows, and “Weekly” and “Less often” would appear at the top to identify the columns. The actual numbers in the table would be moved around accordingly, and each row of percentages would total 100 percent. In that case, you would read the table down a column, still asking what percentage of men and women attended frequently.
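The percentage-down convention can be made concrete in a few lines of Python. This is only an illustrative sketch: the raw cell counts below are back-calculated from the percentages and bases reported in Table 14-10, not published figures.

```python
# Percentage down, read across: each column (a subgroup of the independent
# variable, sex) is converted to percentages totaling 100 down the column;
# the comparison is then made across the "Weekly" row.

# Cell counts back-calculated from Table 14-10's percentages and bases
# (illustrative, not published figures).
raw_counts = {
    "Men":   {"Weekly": 281, "Less often": 995},   # N = 1,276
    "Women": {"Weekly": 473, "Less often": 1052},  # N = 1,525
}

def percentage_down(counts):
    """Turn each column's counts into percentages summing to 100 down the column."""
    table = {}
    for group, cells in counts.items():
        n = sum(cells.values())
        table[group] = {category: round(100 * k / n) for category, k in cells.items()}
    return table

table = percentage_down(raw_counts)

# Reading ACROSS the "Weekly" row compares the subgroups (31% vs. 22%):
for group, cells in table.items():
    print(f"{group}: {cells['Weekly']}% attend weekly")
```

Percentaging across instead would simply transpose the same arithmetic: each row would total 100 percent, and the comparison would be read down a column.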
The logic and the conclusion would be the same in either case; only the form would differ. In reading a table that someone else has constructed, therefore, you need to find out in which direction it has been percentaged. Usually this will be labeled or be clear from the logic of the variables being analyzed. As a last resort, however, you should add the percentages in each column and each row. If each of the columns totals 100 percent, the table has been percentaged down. If the rows total 100 percent each, it has been percentaged across. The rule, then, is as follows:

1. If the table is percentaged down, read across.
2. If the table is percentaged across, read down.

Percentaging a Table

Figure 14-5 reviews the logic by which we create percentage tables from two variables. I’ve used as variables sex and attitudes toward equality for men and women. Here’s another example. Suppose we’re interested in learning something about newspaper editorial policies regarding the legalization of marijuana. We undertake a content analysis of editorials on this subject that have appeared during a given year in a sample of daily newspapers across the nation. Each editorial has been classified as favorable, neutral, or unfavorable toward the legalization of marijuana. Perhaps we wish to examine the relationship between editorial policies and the types of communities in which the newspapers are published, thinking that rural newspapers might be more conservative in this regard than urban ones. Thus, each newspaper (hence, each editorial) has been classified in terms of the population of the community in which it is published.

Table 14-11 presents hypothetical data describing the editorial policies of rural and urban newspapers. Note that the unit of analysis in this example is the individual editorial. Table 14-11 tells us that there were 127 editorials about marijuana in our sample of newspapers published in communities with populations under 100,000.
(Note that this cutting point is chosen for simplicity of illustration and does not mean that rural refers to a community of less than 100,000 in any absolute sense.) Of these, 11 percent (14 editorials) were favorable toward legalization of marijuana, 29 percent were neutral, and 60 percent were unfavorable.

TABLE 14-11 Hypothetical Data Regarding Newspaper Editorials on the Legalization of Marijuana

Editorial Policy Toward        Community Size
Legalizing Marijuana     Under 100,000   Over 100,000
Favorable                11%             32%
Neutral                  29              40
Unfavorable              60              28
100% =                   (127)           (438)

FIGURE 14-5 Percentaging a Table. The figure walks through the logic in seven panels: (a) begin with a group of men and women who either favor (+) or don’t favor (–) gender equality; (b) separate the men and the women (the independent variable); (c) within each gender group, separate those who favor equality from those who don’t (the dependent variable); (d) count the numbers in each cell of the table; (e) compute the percentage of the women who favor equality (80%); (f) compute the percentage of the men who favor equality (60%); (g) conclusions: While a majority of both men and women favored gender equality, women were more likely than men to do so. Thus, gender appears to be one of the causes of attitudes toward sexual equality. The resulting table:

                       Women    Men
Favor equality         80%      60%
Don’t favor equality   20       40
Total                  100%     100%

Of the 438 editorials that appeared in our sample of newspapers published in communities of more than 100,000 residents, 32 percent (140 editorials) were favorable toward legalizing marijuana, 40 percent were neutral, and 28 percent were unfavorable.
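The same percentaging logic extends to a dependent variable with more than two categories. In the sketch below, the favorable counts (14 and 140) come from the text; the neutral and unfavorable counts are back-calculated from the percentages, so they are illustrative rather than given.

```python
# Column percentages for a 3-category dependent variable (editorial policy)
# by a 2-category independent variable (community size), as in Table 14-11.
# Favorable counts are from the text; other counts are back-calculated.
editorials = {
    "Under 100,000": {"Favorable": 14, "Neutral": 37, "Unfavorable": 76},    # N = 127
    "Over 100,000":  {"Favorable": 140, "Neutral": 175, "Unfavorable": 123}, # N = 438
}

def column_percentages(counts):
    """Percentage each community-size column so it totals 100 down the column."""
    out = {}
    for size, cells in counts.items():
        n = sum(cells.values())
        out[size] = {policy: round(100 * k / n) for policy, k in cells.items()}
    return out

pct = column_percentages(editorials)

# Compare the subgroups on one attribute of the dependent variable:
gap = pct["Over 100,000"]["Favorable"] - pct["Under 100,000"]["Favorable"]
print(f"Urban papers are {gap} percentage points more favorable")
```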
When we compare the editorial policies of rural and urban newspapers in our imaginary study, we find—as expected—that rural newspapers are less favorable toward the legalization of marijuana than are urban newspapers. We determine this by noting that a larger percentage (32 percent) of the urban editorials were favorable than the percentage of rural ones (11 percent). We might note as well that more rural than urban editorials were unfavorable (60 percent compared with 28 percent). Note that this table assumes that the size of a community might affect its newspapers’ editorial policies on this issue, rather than that editorial policy might affect the size of communities.

Constructing and Reading Bivariate Tables

Let’s now review the steps involved in the construction of explanatory bivariate tables:

1. The cases are divided into groups according to the attributes of the independent variable.
2. Each of these subgroups is then described in terms of attributes of the dependent variable.
3. Finally, the table is read by comparing the independent variable subgroups with each other in terms of a given attribute of the dependent variable.

Let’s repeat the analysis of sex and attitude on gender equality following these steps. For the reasons outlined previously, sex is the independent variable; attitude toward gender equality constitutes the dependent variable. Thus, we proceed as follows:

1. The cases are divided into men and women.
2. Each gender subgrouping is described in terms of approval or disapproval of gender equality.
3. Men and women are compared in terms of the percentages approving of gender equality.

contingency table: A format for presenting the relationships among variables as percentage distributions.

In the example of editorial policies regarding the legalization of marijuana, size of community is the independent variable, and a newspaper’s editorial policy the dependent variable. The table would be constructed as follows:

1.
Divide the editorials into subgroups according to the sizes of the communities in which the newspapers are published.
2. Describe each subgroup of editorials in terms of the percentages favorable, neutral, or unfavorable toward the legalization of marijuana.
3. Compare the two subgroups in terms of the percentages favorable toward the legalization of marijuana.

Bivariate analyses typically have an explanatory causal purpose. These two hypothetical examples have hinted at the nature of causation as social scientists use it. Tables such as the ones we’ve been examining are commonly called contingency tables: Values of the dependent variable are contingent on (depend on) values of the independent variable. Although contingency tables are common in social science, their format has never been standardized. As a result, you’ll find a variety of formats in research literature. As long as a table is easy to read and interpret, there’s probably no reason to strive for standardization. However, there are several guidelines that you should follow in the presentation of most tabular data:

1. A table should have a heading or a title that succinctly describes what is contained in the table.
2. The original content of the variables should be clearly presented—in the table itself if at all possible or in the text with a paraphrase in the table. This information is especially critical when a variable is derived from responses to an attitudinal question, because the meaning of the responses will depend largely on the wording of the question.
3. The attributes of each variable should be clearly indicated. Though complex categories will have to be abbreviated, their meaning should be clear in the table and, of course, the full description should be reported in the text.
4. When percentages are reported in the table, the base on which they are computed should be indicated.
It’s redundant to present all the raw numbers for each category, because these could be reconstructed from the percentages and the bases. Moreover, the presentation of both numbers and percentages often confuses a table and makes it more difficult to read.

5. If any cases are omitted from the table because of missing data (“no answer,” for example), their numbers should be indicated in the table.

Although I have introduced the logic of causal, bivariate analysis in terms of percentage tables, many other formats are appropriate to this topic. Scatterplot graphs are one possibility, providing a visual display of the relationship between two variables. For an engaging example of this, you might check out the GapMinder software available on the web. Using countries as the unit of analysis, you can examine the relationship between birthrate and infant mortality, for example. In fact, you can watch the relationship develop over time. You can find GapMinder at http://tools.google.com/gapminder/.

INTRODUCTION TO MULTIVARIATE ANALYSIS

The logic of multivariate analysis, or the analysis of more than two variables simultaneously, can be seen as an extension of bivariate analysis. Specifically, we can construct multivariate tables on the basis of a more complicated subgroup description by following essentially the same steps outlined for bivariate tables. Instead of one independent variable and one dependent variable, however, we’ll have more than one independent variable. Instead of explaining the dependent variable on the basis of a single independent variable, we’ll seek an explanation through the use of more than one independent variable.
TABLE 14-12 Multivariate Relationship: Religious Service Attendance, Sex, and Age

“How often do you attend religious services?”

                   Under 40            40 and Older
                   Men      Women      Men      Women
About weekly*      16%      24%        26%      35%
Less often         84       76         74       65
100% =             (495)    (602)      (781)    (923)

*“About weekly” = “More than once a week,” “Weekly,” and “Nearly every week.”
Source: General Social Survey, 2004, National Opinion Research Center.

multivariate analysis: The analysis of the simultaneous relationships among several variables. Examining simultaneously the effects of age, sex, and social class on religiosity would be an example of multivariate analysis.

Let’s return to the example of religious attendance. Suppose we believe that age would also affect such behavior (Glock’s Comfort Hypothesis suggests that older people are more religious than younger people). As the first step in table construction, we would divide the total sample into subgroups based on the attributes of both independent variables simultaneously: younger men, older men, younger women, and older women. Then the several subgroups would be described in terms of the dependent variable, religious service attendance, and comparisons would be made. Table 14-12, from an analysis of the 2004 General Social Survey data, is the result.

Table 14-12 has been percentaged down and therefore should be read across. The interpretation of this table warrants several conclusions:

1. Among both men and women, older people attend religious services more often than do younger people. Among women, 24 percent of those under 40 and 35 percent of those 40 and older attend religious services weekly. Among men, the respective figures are 16 and 26 percent.

2. Within each age group, women attend slightly more frequently than men. Among those respondents under 40, 24 percent of the women attend weekly, compared with 16 percent of the men. Among those 40 and over, 35 percent
of the women and 26 percent of the men attend weekly.

3. As measured in the table, age appears to have a greater effect on attendance at religious services than does sex.

4. Age and sex have independent effects on religious service attendance. Within a given attribute of one independent variable, different attributes of the second still affect behaviors.

5. Similarly, the two independent variables have a cumulative effect on behaviors. Older women attend the most often (35 percent), and younger men attend the least often (16 percent).

Before I conclude this section, it will be useful to note an alternative format for presenting such data. Several of the tables presented in this chapter are somewhat inefficient. When the dependent variable, religious attendance, is dichotomous (having exactly two attributes), knowing one attribute permits the reader to reconstruct the other easily. Thus, if we know that 24 percent of the women under 40 attend religious services weekly, then we know automatically that 76 percent attend less often. So reporting the percentages who attend less often is unnecessary. On the basis of this recognition, Table 14-12 could be presented in the alternative format of Table 14-13.

TABLE 14-13 A Simplification of Table 14-12: Percent Who Attend About Weekly

          Under 40     40 and Older
Men       16 (495)     26 (781)
Women     24 (602)     35 (923)

Source: General Social Survey, 2004, National Opinion Research Center.

In Table 14-13, the percentages of people saying they attend religious services about weekly are reported in the cells representing the intersections of the two independent variables. The numbers presented in parentheses below each percentage represent the number of cases on which the percentages are based. Thus, for example, the reader knows there are 602 women under 40 years of age in the sample, and 24 percent of them attend religious services weekly.
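The economy of this collapsed format can be sketched in code. In the sketch below, the counts of weekly attenders are back-calculated from the published percentages, so they are illustrative rather than exact.

```python
# With a dichotomous dependent variable, each cell of the two-independent-
# variable table needs only one percentage and its base, as in Table 14-13.
# Weekly-attender counts are back-calculated from the published percentages
# (illustrative values, not published figures).
cells = {
    ("Men", "Under 40"):       (79, 495),   # (attend about weekly, cell N)
    ("Women", "Under 40"):     (144, 602),
    ("Men", "40 and older"):   (203, 781),
    ("Women", "40 and older"): (323, 923),
}

def percent_weekly(cells):
    """One percentage per cell; the omitted category is implied."""
    return {cell: round(100 * k / n) for cell, (k, n) in cells.items()}

summary = percent_weekly(cells)

# Because the dependent variable is dichotomous, the omitted category
# is recoverable by subtraction:
men_under_40_less_often = 100 - summary[("Men", "Under 40")]

# A percentage plus its base recovers an approximate raw count (approximate
# because the displayed percentage is rounded):
approx_women = round(summary[("Women", "Under 40")] / 100 * 602)
```

Note that reconstructing 24 percent of 602 gives about 144; the chapter’s figure of 147 reflects the unrounded percentage (roughly 24.4 percent), a reminder that displayed percentages are themselves rounded.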
We can calculate from this that 147 of those 602 women attend weekly and that the other 455 younger women (or 76 percent) attend less frequently. This new table is easier to read than the former one, and it does not sacrifice any detail.

SOCIOLOGICAL DIAGNOSTICS

The multivariate techniques we are now exploring can serve as powerful tools for diagnosing social problems. They can be used to replace opinions with facts and to settle ideological debates with data analysis. For an example, let’s return to the issue of gender and income. Many explanations have been advanced to account for the long-standing pattern of women in the labor force earning less than men. One explanation is that, because of traditional family patterns, women as a group have participated less in the labor force and many only begin working outside the home after completing certain child-rearing tasks. Thus, women as a group will probably have less seniority at work than will men, and income increases with seniority. An important 1984 study by the Census Bureau showed this reasoning to be partly true, as Table 14-14 shows.

Table 14-14 indicates, first of all, that job tenure did indeed affect income. Among both men and women, those with more years on the job earned more. This is seen by reading down the first two columns of the table. The table also indicates that women earned less than men, regardless of job seniority. This can be seen by comparing average wages across the rows of the table, and the ratio of women-to-men wages is shown in the third column. Thus, years on the job was an important determinant of earnings, but seniority did not adequately explain the pattern of women earning less than men. In fact, we see that women with 10 or more years on the job earned substantially less ($7.91/hour) than men with less than two years ($8.46/hour).
TABLE 14-14 Gender, Job Tenure, and Income, 1984 (Full-time workers 21–64 years of age)

                          Average Hourly Income
Years Working with
Current Employer       Men       Women     Women/Men Ratio
Less than 2 years      $8.46     $6.03     0.71
2 to 4 years           $9.38     $6.78     0.72
5 to 9 years           $10.42    $7.56     0.73
10 years or more       $12.38    $7.91     0.64

Source: U.S. Bureau of the Census, Current Population Reports, Series P-70, No. 10, Male-Female Differences in Work Experience, Occupation, and Earning, 1984 (Washington, DC: U.S. Government Printing Office, 1987), 4.

Although years on the job did not fully explain the difference between men’s and women’s pay, there are other possible explanations: level of education, child care responsibilities, and so forth. The researchers who calculated Table 14-14 also examined some of the other variables that might reasonably explain the differences in pay without representing gender discrimination, including these:

• Number of years in the current occupation
• Total years of work experience (any occupation)
• Whether they have usually worked full time
• Marital status
• Size of city or town they live in
• Whether covered by a union contract
• Type of occupation
• Number of employees in the firm
• Whether private or public employer
• Whether they left previous job involuntarily
• Time spent between current and previous job
• Race
• Whether they have a disability
• Health status
• Age of children
• Whether they took an academic curriculum in high school
• Number of math, science, and foreign language classes in high school
• Whether they attended private or public high school
• Educational level achieved
• Percentage of women in the occupation
• College major

Each of the variables listed here might reasonably affect earnings and, if women and men differ in these regards, could help to account for male/female income differences.
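Before those additional variables enter the picture, the two-variable pattern in Table 14-14 can be checked with a few lines of Python. The wage figures come from the table; the ratio computation is a sketch of the kind of check a reader might run.

```python
# Reading across the rows of Table 14-14: the women/men wage ratio at each
# level of job tenure. Holding tenure roughly constant, the gap persists.
wages = {  # tenure: (men's hourly wage, women's hourly wage)
    "Less than 2 years": (8.46, 6.03),
    "2 to 4 years":      (9.38, 6.78),
    "5 to 9 years":      (10.42, 7.56),
    "10 years or more":  (12.38, 7.91),
}

ratios = {tenure: round(w / m, 2) for tenure, (m, w) in wages.items()}

# Seniority cannot be the whole story: the most senior women still earn
# less per hour than the least senior men.
most_senior_women = wages["10 years or more"][1]   # $7.91
least_senior_men = wages["Less than 2 years"][0]   # $8.46
print(ratios)
```

Reproducing the table’s published ratios (0.71 through 0.64) from the raw wages is also a quick sanity check on the table itself.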
When all these variables were taken into account, the researchers were able to account for 60 percent of the discrepancy between the incomes of men and women. The remaining 40 percent, then, is a function of other “reasonable” variables and/or prejudice. This kind of conclusion can be reached only by examining the effects of several variables at the same time—that is, through multivariate analysis. I hope this example shows how the logic implicit in day-to-day conversations can be represented and tested in a quantitative data analysis like this. See the box “Salary Discrimination against Women” for more on this issue.

As another example of multivariate data analysis in real life, consider the common observation that minority group members are more likely to be denied bank loans than are white applicants. A counterexplanation might be that the minority applicants in question were more likely to have had a prior bankruptcy or that they had less collateral to guarantee the requested loan—both reasonable bases for granting or denying loans. However, the kind of multivariate analysis we’ve just examined could easily resolve the disagreement. Let’s say we look only at those who have not had a prior bankruptcy and who have a certain level of collateral. Are whites and minorities equally likely to get the requested loan? We could conduct the same analysis in subgroups determined by level of collateral. If whites and minorities were equally

IN THE REAL WORLD: SALARY DISCRIMINATION AGAINST WOMEN

The data in the text pointed to salary discrimination against women in 1984, but hasn’t that been remedied? Not really, as indicated by more recent data. In 2003 the average full-time, year-round male worker earned $53,039. The average full-time, year-round female worker earned $37,197, or about 70 percent as much as her male counterpart (U.S. Bureau of the Census 2006:467).
But does that difference represent gender discrimination, or does it reflect legitimate factors? For example, some people argue that education affects income and that, in the past, women have gotten less education than men. We might start, therefore, by checking whether educational differences explain why women today earn less, on average, than men. Table 14-15, which we discussed briefly in Chapter 11, offers data to test this hypothesis. As the table shows, at each level of comparable education, women earn substantially less than men. Clearly, education does not explain the discrepancy. This is the kind of analysis you are now equipped to undertake.

likely to get their loans in each of the subgroups, we would need to conclude that there was no ethnic discrimination. If minorities were still less likely to get their loans, however, that would indicate that bankruptcy and collateral differences were not the explanation—strengthening the case that discrimination was at work. All this should make clear that social research can play a powerful role in serving the human community. It can help us determine the current state of affairs and can often point the way to where we want to go. Welcome to the world of sociological diagnostics!

ETHICS AND QUANTITATIVE DATA ANALYSIS

In Chapter 13, I pointed out that the subjectivity present in qualitative data analysis increases the risk of biased analyses, which experienced researchers learn to avoid. Some think, however, that quantitative analyses are not susceptible to subjective biases. Unfortunately, this isn’t so. Even the most mathematically explicit analysis yields ample room for defining and measuring variables in ways that encourage one finding over another, and quantitative analysts need to guard against this. Sometimes, the careful specification of hypotheses in advance can offer protection, although this can also inhibit a full exploration of what data can tell us.

WHAT DO YOU THINK?
REVISITED

This chapter began with a question about whether anything meaningful or useful could be learned from the analysis of data that have been stripped of many details in order to permit statistical manipulation. The answer, we’ve seen, is an unqualified “yes.” Quantitative analysis can be a tool for social change. For instance, calculating the average incomes of men and women or of whites and minorities can demonstrate the inequalities that exist for people doing exactly the same job. Such quantitative analyses can overpower anecdotal evidence about particular women or minorities earning large salaries. We’ve also seen that quantitative analyses of qualitative phenomena, such as voting intentions, can be done with precision and utility. The key lesson is that both qualitative and quantitative research are legitimate and powerful approaches to understanding social life. They are particularly useful, moreover, when used together.

TABLE 14-15 Average Earnings of Year-Round, Full-Time Workers, 2003

                       Men       Women     Ratio of Women/Men Earnings
All workers            53,039    37,197    0.70
Less than 9th grade    23,972    20,979    0.88
9th–12th grades        29,100    21,426    0.74
HS graduates           38,331    27,956    0.73
Some college           46,332    31,655    0.68
Associate degree       48,683    36,528    0.75
Bachelor’s or more     81,007    53,215    0.66

Source: U.S. Bureau of the Census, Statistical Abstract of the United States (Washington, DC: U.S. Government Printing Office, 2006), Table 686, p. 467. You can also access this table online at http://www.census.gov/prod/2005pubs/06statab/income.pdf.

The quantitative analyst has an obligation to report formal hypotheses and less formal expectations that didn’t pan out. Let’s suppose you think that a particular variable will prove a powerful cause of gender prejudice, but your data analysis contradicts that expectation. You should report the lack of correlation, because such information is useful to others who conduct research on this topic.
Although it would be more satisfying to discover what causes prejudice, it’s quite important to know what doesn’t cause it.

The protection of subjects’ privacy is as important in quantitative analysis as in qualitative analysis. With quantitative methods, however, it’s often easier to collect and record data in ways that make subject identification more difficult. Even so, the first time public officials demand that you reveal the names of student subjects who reported using illegal drugs in a survey, this issue will take on more salience. (Don’t reveal the names, by the way. If necessary, burn the questionnaires—accidentally.)

Main Points

Introduction
❏ Most data are initially qualitative: They must be quantified to permit statistical analysis.
❏ Quantitative analysis involves the techniques by which researchers convert data to a numerical form and subject it to statistical analyses.

Quantification of Data
❏ Some data, such as age and income, are intrinsically numerical.
❏ Often, quantification involves coding into categories that are then given numerical representations.
❏ Researchers may use existing coding schemes, such as the Census Bureau’s categorization of occupations, or develop their own coding categories. In either case, the coding scheme must be appropriate to the nature and objectives of the study.
❏ A codebook is the document that describes the identifiers assigned to different variables and the codes assigned to represent the attributes of those variables.

Univariate Analysis
❏ Univariate analysis is the analysis of a single variable. Because univariate analysis does not involve the relationships between two or more variables, its purpose is descriptive rather than explanatory.
❏ Several techniques allow researchers to summarize their original data to make them more manageable while maintaining as much of the original detail as possible.
Frequency distributions, averages, grouped data, and measures of dispersion are all ways of summarizing data concerning a single variable.

Subgroup Comparisons
❏ Subgroup comparisons can be used to describe similarities and differences among subgroups with respect to some variable.
❏ Collapsing response categories and handling “don’t knows” are two techniques for presenting and interpreting data.

Bivariate Analysis
❏ Bivariate analysis focuses on relationships between variables rather than comparisons of groups. Bivariate analysis explores the statistical association between the independent variable and the dependent variable. Its purpose is usually explanatory rather than merely descriptive.
❏ The results of bivariate analyses often are presented in the form of contingency tables, which are constructed to reveal the effects of the independent variable on the dependent variable.

Introduction to Multivariate Analysis
❏ Multivariate analysis is a method of analyzing the simultaneous relationships among several variables. It may also be used to understand the relationship between two variables more fully.
❏ The logic and techniques of quantitative research can be valuable to qualitative researchers.

Sociological Diagnostics
❏ Sociological diagnostics is a quantitative analysis technique for determining the nature of social problems such as ethnic or gender discrimination.

Ethics and Quantitative Data Analysis
❏ Unbiased analysis and reporting is as much an ethical concern in quantitative analysis as in qualitative analysis.
❏ Subjects’ privacy must be protected in quantitative data analysis and reporting.

Key Terms

average, bivariate analysis, codebook, contingency table, continuous variable, discrete variable, dispersion, frequency distribution, mean, median, mode, multivariate analysis, quantitative analysis, standard deviation, univariate analysis

Review Questions

1. How might the various majors at your college be classified into categories?
Create a coding system that would allow you to categorize them according to some meaningful variable. Then create a different coding system, using a different variable.

2. How many ways could you be described in numerical terms? What are some of your intrinsically numerical attributes? Could you express some of your qualitative attributes in quantitative terms?

3. How would you construct and interpret a contingency table from the following information: 150 Democrats favor raising the minimum wage, and 50 oppose it; 100 Republicans favor raising the minimum wage, and 300 oppose it?

4. Using the hypothetical data in the following table, how would you construct and interpret tables showing these three relationships?
a. The bivariate relationship between age and attitude toward abortion
b. The bivariate relationship between political orientation and attitude toward abortion
c. The multivariate relationship linking age, political orientation, and attitude toward abortion

Age     Political Orientation   Attitude toward Abortion   Frequency
Young   Liberal                 Favor                      90
Young   Liberal                 Oppose                     10
Young   Conservative            Favor                      60
Young   Conservative            Oppose                     40
Old     Liberal                 Favor                      60
Old     Liberal                 Oppose                     40
Old     Conservative            Favor                      20
Old     Conservative            Oppose                     80

Additional Readings

Babbie, Earl, Fred Halley, and Jeanne Zaino. 2000. Adventures in Social Research. Newbury Park, CA: Pine Forge Press. This book introduces you to the analysis of social research data through SPSS for Windows. Several of the basic statistical techniques used by social researchers are discussed and illustrated.

Bernstein, Ira H., and Paul Havig. 1999. Computer Literacy: Getting the Most from Your PC. Thousand Oaks, CA: Sage. Here's a quick overview of the various ways social scientists use computers, including many common applications programs.

Davis, James. 1971. Elementary Survey Analysis. Englewood Cliffs, NJ: Prentice-Hall. An extremely well-written and well-reasoned introduction to analysis.
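The contingency tables asked for in Review Questions 3 and 4 above can be sketched in Python. This is a minimal, hypothetical illustration using only the counts given in the questions; percentages are computed within categories of the independent variable, the percentaging convention the chapter recommends.

```python
# Contingency-table logic for the hypothetical Review Question data.

def pct_favor(favor, oppose):
    """Percentage favoring, within one category of the independent variable."""
    return round(100 * favor / (favor + oppose))

# Question 3: party (independent variable) by attitude toward the minimum wage.
print("Democrats favor:", pct_favor(150, 50))    # 75
print("Republicans favor:", pct_favor(100, 300))  # 25

# Question 4: (age, political orientation, attitude) -> frequency.
data = {
    ("Young", "Liberal", "Favor"): 90, ("Young", "Liberal", "Oppose"): 10,
    ("Young", "Conservative", "Favor"): 60, ("Young", "Conservative", "Oppose"): 40,
    ("Old", "Liberal", "Favor"): 60, ("Old", "Liberal", "Oppose"): 40,
    ("Old", "Conservative", "Favor"): 20, ("Old", "Conservative", "Oppose"): 80,
}

def bivariate(var_index):
    """Collapse the table over the other variable; return % favoring by category."""
    totals = {}
    for key, n in data.items():
        cat, att = key[var_index], key[2]
        totals.setdefault(cat, {"Favor": 0, "Oppose": 0})
        totals[cat][att] += n
    return {cat: pct_favor(d["Favor"], d["Oppose"]) for cat, d in totals.items()}

print(bivariate(0))  # {'Young': 75, 'Old': 40}
print(bivariate(1))  # {'Liberal': 75, 'Conservative': 40}

# Multivariate: % favoring within each age/orientation subgroup.
for age in ("Young", "Old"):
    for pol in ("Liberal", "Conservative"):
        p = pct_favor(data[(age, pol, "Favor")], data[(age, pol, "Oppose")])
        print(age, pol, p)  # 90, 60, 60, 20 respectively
```

Read this way, 75 percent of Democrats but only 25 percent of Republicans favor raising the minimum wage; and in the Question 4 data, age and political orientation each retain an effect on abortion attitudes when the other is held constant.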
In addition to covering the materials just presented in this chapter, Davis's book is well worth reading in terms of measurement and statistics.

Ferrante, Joan, and Angela Vaughn. 1999. Let's Go Sociology: Travels on the Internet. Belmont, CA: Wadsworth. This accessible little book gives an excellent introduction to the Internet and suggests many websites of interest to social researchers.

Lewis-Beck, Michael. 1995. Data Analysis: An Introduction. Volume 103 in the Quantitative Applications in the Social Sciences series. Thousand Oaks, CA: Sage. This popular short book makes statistical language accessible to the novice. You should enjoy the clarity of explanations and the thorough use of examples.

Nardi, Peter. 2006. Interpreting Data: A Guide to Understanding Research. Boston: Pearson. This excellent little book offers an accessible guide to understanding commonly used statistical analyses in the social sciences.

Newton, Rae R., and Kjell Erik Rudestam. 1999. Your Statistical Consultant: Answers to Your Data Analysis Questions. Thousand Oaks, CA: Sage. An excellent, reader-friendly manual that will answer all sorts of questions you have or will have as soon as you begin to analyze quantitative data.

Zeisel, Hans. 1957. Say It with Figures. New York: Harper & Row. An excellent discussion of table construction and other elementary analyses. Though many years old, this is still perhaps the best available presentation of that specific topic. It is eminently readable and understandable and has many concrete examples.

Online Study Resources

Go to http://sociology.wadsworth.com/babbie_basics4e and click on ThomsonNow for access to this powerful online study tool. You will get a personalized study plan based on your responses to a diagnostic pretest. Once you have mastered the material with the help of interactive learning tools, you can take a posttest to confirm that you are ready to move on to the next chapter.

Website for The Basics of Social Research, 4th edition

At the book companion website (http://sociology.wadsworth.com/babbie_basics4e) you will find many resources in addition to ThomsonNow to aid you in studying for your exams. For example, you will find Tutorial Quizzes with feedback, Internet Exercises, Flashcards, and Chapter Tutorials, as well as Extended Projects, InfoTrac College Edition search terms, Social Research in Cyberspace, GSS Data, Web Links, and primers for using various data analysis software such as SPSS and NVivo.

READING AND WRITING SOCIAL RESEARCH

What You'll Learn in This Chapter: Social research is useless unless it's communicated effectively to others. We'll examine the special skills involved in reading the research of others and writing about your own.

In this chapter . . .
Introduction
Reading Social Research
  Organizing a Review of the Literature
  Journals versus Books
  Evaluation of Research Reports
  Using the Internet Wisely
Writing Social Research
  Some Basic Considerations
  Organization of the Report
  Guidelines for Reporting Analyses
  Going Public
The Ethics of Reading and Writing Social Research

WHAT DO YOU THINK? The Internet seems like a great place to get information for term papers, but some of your professors may object, saying the quality of data on the Internet can't be trusted. What should you do? First, read this chapter. Then, see the "What Do You Think? Revisited" box toward the end of the chapter.

INTRODUCTION

Meaningful scientific research is inextricably wed to communication, but it's not always an easy or comfortable marriage. Scientists—social and other—are not necessarily good at communicating their methods and findings.
Thus, it's often hard to read and understand the research of others, and you may also find it difficult to write up your own research in ways that communicate your ideas effectively. This final chapter addresses these two problems. We'll begin with reading social research, then turn to writing it. Although I'll offer guidance on both topics, you'll find that the key to mastering each lies in practice. The more you read social science research, the easier it gets, and the same is true of writing it.

READING SOCIAL RESEARCH

"Reading" is not as simple a task as it may seem, especially when it involves social research. First, you need to organize a review of the literature in order to focus on the resources that will help you the most. Then, when you actually sit down to read them, you'll need certain skills for doing so efficiently. Finally, you should know how to find and assess sources on the Internet.

Organizing a Review of the Literature

With the exception of some grounded theory methodologists, most social researchers begin the design of a research project with a review of the literature, as indicated in Chapter 4. Most original research is seen as an extension of what has previously been learned about a particular topic. A review of the literature is the way we learn what's already known and not known. In most cases, you should organize your search of the literature around the key concepts you wish to study; alternatively, you may want to study a certain population: Iraq War veterans, computer hackers, Catholic priests, gay athletes, and so forth. In any case, you'll identify a set of terms that represent your core interests. Your college or university library will probably have several search routines you can use at the library or online. Let's say you're interested in designing a study of attitudes toward capital punishment.
If your library provides access to InfoTrac College Edition or a similar program, you might discover, as I just did, 8,735 newspaper references and 5,489 periodical references to capital punishment. In such situations, InfoTrac College Edition is indexed to allow narrowing the search, and I soon discovered 249 entries for "public opinion" on capital punishment. Some of the entries were bibliographic citations, and some were full-text articles I could read online.

Another resource available to everyone is the Library of Congress, easily accessed online at http://catalog.loc.gov/. Clicking on "Basic Search" or "Guided Search" will open up a vast resource for you. When I specified the keyword as "capital punishment" and limited the search to English-language books published between 2000 and 2005, the site listed 3,674 entries, such as the following:

• Abolition of the death penalty : SAHRDC's submission to the National Commission for the Review of the Working of the Constitution.
• America's experiment with capital punishment : reflections on the past, present, and future of the ultimate penal sanction / [edited by] James R. Acker.
• Beyond repair? : America's death penalty / edited by Stephen P. Garvey.
• Capital punishment : a bibliography / C. Cliff, editor.
• Death penalty : influences and outcomes / edited by Austin Sarat.

abstract: A summary of a research article. The abstract usually begins the article and states the purpose of the research, the methods used, and the major findings.

Sometimes a simple web search is a useful way to begin. Use a search engine such as Google, HotBot, or Yahoo to look for web resources on "capital punishment" or "death penalty." Be sure to use quotation marks to look for a phrase rather than using two separate words. You might also add "public opinion" to the request to narrow the field of possible resources. In general, online searches tend to turn up huge numbers of entries, most of which will not help you much.
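The advice about quotation marks can be made concrete with a toy sketch (the titles below are invented for illustration): a phrase search requires the words to appear together, while two separate words match any item containing both words anywhere.

```python
# Hypothetical catalog titles, for illustration only.
titles = [
    "Public opinion and capital punishment in the United States",
    "Capital investment and the punishment of market failure",
    "The death penalty debate",
]

# Phrase search: the exact phrase must appear.
phrase_hits = [t for t in titles if "capital punishment" in t.lower()]

# Separate-word search: both words anywhere in the title.
word_hits = [t for t in titles
             if "capital" in t.lower() and "punishment" in t.lower()]

print(len(phrase_hits))  # 1: only the exact phrase matches
print(len(word_hits))    # 2: the economics title matches too
```

Real search engines apply the same distinction across billions of pages, which is why the unquoted version of a query so often buries the relevant results.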
You’ll need some time to separate the wheat from the chaff. Later in this chapter, I’ll give you more-detailed guidelines for searching the web. No matter how you start the literature review process, you should always consider a technique akin to snowball sampling, discussed in Chapter 7. Once you identify a particularly useful book or article, note which publications its author cites. Some of these will likely be useful. In fact, you’ll probably discover some citations that appear again and again, suggesting that they’re core references within the subject matter area you’re exploring. This last point is important, because the literature review is not about providing “window dressing” in the form of a few citations. Rather, it’s about digging into the body of knowledge that previous researchers have generated—and taking advantage of that knowledge as you design your own inquiry. Once you’ve identified some potential resources, you must read them and find anything of value to your project. Here are some guidelines for reading research publications. Journals versus Books As you might have guessed, you don’t read a social research report the way you’d read a novel. You can, of course, but it’s not the most effective approach. Journal articles and books are laid out somewhat differently, so here are some initial guidelines for reading each. Reading a Journal Article In most journals, each article begins with an abstract. Read it first. It should tell you the purpose of the research, the methods used, and the major findings. In a good detective or spy novel, the suspense builds throughout the book and is resolved in some kind of surprise ending. This is not the effect most scholarly writers are going for. Social research is purposely anticlimactic. Rather than stringing the reader along, dragging out the suspense over whether X causes Y, social researchers willingly give away the punch line in the abstract. The abstract serves two main functions. 
First, it gives you a good idea as to whether you’ll want to read the rest of the article. If you’re reviewing the literature for a paper you’re writing, the abstract tells you whether that particular article is relevant. Second, the abstract establishes a framework within which to read the rest of the article. It may raise questions in your mind regarding method or conclusions, thereby creating an agenda to pursue in your reading. (It’s not a bad idea to jot those questions down, to be sure you get answers to them.) After you’ve read the abstract, you might go directly to the summary and/or conclusions at the end of the article. That will give you a more detailed picture of what the article is all about. (You can also do this with detective and spy novels; it makes reading them a lot faster but maybe not as much fun.) Jot down any new questions or observations that occur to you. Next, skim the article, noting the section headings and any tables or graphs. You don’t need to study any of these things in your skimming, though it’s okay to dally with anything that catches your attention. By the end of this step, you should start feeling familiar with the article. You should be pretty clear on the researcher’s conclusions and have a general idea of the methods used in reaching them. Now, when you carefully read the whole article, you’ll have a good idea of where it’s heading and how each section fits into the logic of the whole article. Keep taking notes. Mark any passages you think you might like to quote later on. After carefully reading the article, it’s a good idea to skim it quickly one more time. This way you get back in touch with the forest after having focused on the trees. If you want to fully grasp what you’ve just read, find someone else to explain it to. 
If you're doing the reading in connection with a course, you should have no trouble finding someone willing to listen. If you can explain it coherently to someone who has no prior contact with the subject matter, however, you'll have an absolute lock on the material.

Reading a Book

The approach for reading articles can be adapted to reading a book-length report, sometimes also called a research monograph. These longer research reports cover the same basic terrain and structure. Instead of an abstract, the preface and opening chapter of the book should lay out the purpose, method, and main findings of the study. The preface tends to be written more informally and to be easier to understand than an abstract.

research monograph: A book-length research report, either published or unpublished. This is distinguished from a textbook, a book of essays, a novel, and so forth.

As with an article, it's useful to skim through the book, getting a sense of its organization; its use of tables, graphs, or other visuals; and so forth. You should come away from this step feeling somewhat familiar with the book. And, as I suggested in connection with reading an article, you should take notes as you go along, writing down things you observe and questions that are raised. As you settle in to read the book more carefully, you should repeat this same process with each chapter. Read the opening paragraphs to get a sense of what's to come, and then skip to the concluding paragraphs for the summary. Skim the chapter to increase your familiarity with it, and then read more deliberately, taking notes as you go. It's sometimes okay to skip portions of a scholarly book, unlike the way you were taught to read and appreciate literature. This all depends on your purpose in reading it in the first place.
Perhaps there are only a few portions of the book that are relevant to your purposes. However, realize that if you're interested in the researcher's findings, you must pay some attention to the methods used (for instance, who was studied, how, and when?) in order to judge the quality of the conclusions offered by the author.

Evaluation of Research Reports

In this section, I provide sets of questions you might ask in reading and evaluating a research report. I've organized these questions to parallel some of the preceding chapters in this book, to facilitate your getting more details on a topic if necessary. Although hardly exhaustive, these questions should help you grasp the meanings of the research reports you read and alert you to potential problems in them.

Theoretical Orientations

• Is there a theoretical aspect to the study, or do no references to theory appear?
• Can you identify the researcher's chief paradigm or theoretical orientation? Authors quoted in the report's review of the literature and elsewhere may offer a clue.
• On the other hand, is the author attempting to refute some paradigm or theory? Is a theory or hypothesis being tested?
• In what way has the theoretical orientation shaped the methodology used in the study, such as the data-collection technique and the choice of which data were collected and which were ignored?
• Is the methodology used appropriate to the theoretical issues involved?

Research Design

• What was the purpose of the study: exploration, description, explanation, or a combination?
• Who conducted the research? Who paid for it, if anyone? What motivated the study? If the study's conclusions happen to correspond to the interests of the sponsor or researcher, this doesn't disqualify the conclusions, but you'll want to be especially wary.
• What was the unit of analysis? Was it appropriate to the purpose of the study? Are the conclusions drawn from the research appropriate to the unit of analysis?
For example, have the researchers studied cities and ended up with assertions about individuals?
• Is this a cross-sectional or a longitudinal study? Be especially wary of longitudinal assertions being made on the basis of cross-sectional observations.
• If longitudinal data have been collected, have comparable measurements been made at each point in time? In the case of survey data, have the same questions been asked each time? If the report compares, say, crime or poverty rates, are they defined the same way each time? (Definitions of poverty, for example, change frequently.)
• If a panel study has been conducted, how many people dropped out over the course of the study?

Measurement

• What are the names of the concepts under study?
• Has the researcher delineated different dimensions of the variables? Do the analysis and reporting maintain those distinctions?
• What indicators—either qualitative or quantitative—have been chosen as measures of those dimensions and concepts? Is each indicator a valid measure of what it's intended to measure? What else could the indicator be a measure of? Is it a reliable measure? Has the reliability been tested?
• What is the level of measurement of each variable: nominal, ordinal, interval, or ratio? Is it the appropriate level?
• Have composite measurements (indexes, scales, or typologies) been used? If so, are they appropriate to the purpose of the study? Have they been constructed correctly?

Sampling

• Was it appropriate to study a sample, or should all elements have been studied? Remember, it's not always feasible to select a random sample.
• If sampling was called for, were probability sampling methods appropriate, or would a purposive, snowball, or quota sample have been better? Has the appropriate sample design been used?
• What population does the researcher want to draw conclusions about?
• What is the researcher's purpose?
If it's statistical description, then rigorous probability sampling methods are called for.
• If a probability sample has been selected, what sampling frame has been used? Does it appropriately represent the population that interests the researcher? What elements of the population have been omitted from the sampling frame, and what extraneous elements have been included?
• What specific sampling techniques have been employed: simple random sampling, systematic sampling, or cluster sampling? Has the researcher stratified the sampling frame prior to sampling? Have the stratification variables been chosen wisely? That is, are they relevant to the variables under study?
• How large a sample was selected? What percentage of the sample responded? Are there any likely differences between those who responded and those who didn't?
• Even assuming that the respondents are representative of those selected in the sample, what sampling error do you expect from a sample of this size?
• Has the researcher tested for representativeness: comparing the gender distribution of the population and of respondents, for example, or their ages, ethnicity, education, or income?
• Ultimately, do the studied individuals (or other units of analysis) represent the larger population from which they were chosen? That is, do conclusions drawn about the sample tell us anything about meaningful populations or about life in general?
• If probability sampling and statistical representation were not appropriate to the study—in a qualitative study, for example—have subjects and observations been selected in such a way as to provide a broad overview of the phenomenon being examined? Has the researcher paid special attention to deviant or disconfirming cases?

Experiments

• What is the primary dependent variable in the experiment? What effect is the experimenter trying to achieve, for example?
• What is the experimental stimulus?
• What other variables are relevant to the experiment? Have they been measured?
• How has each variable been defined and measured? What potential problems of validity and reliability do these definitions and measurements raise?
• Has a proper control group been used? Have subjects been assigned to the experimental and control groups through random selection or by matching? Has it been done properly? Has the researcher provided any evidence of the initial comparability of experimental and control-group subjects?
• Have there been pre- and posttest measurements of the dependent variable?
• What is the chance of a placebo (or "Hawthorne") effect in the experiment? Has any attention been given to the problem? Does the study employ a double-blind design, for example?
• Are there any problems of internal invalidity: history, maturation, testing, instrumentation, statistical regression, selection bias, experimental mortality, or demoralization?
• Are there issues of external invalidity? How has the experimenter ensured that the laboratory findings will apply to life in the real world?

Survey Research

• Does the study stand up to all the relevant questions regarding sampling?
• What questions were asked of respondents? What was the precise wording of the questions? Be wary of research reports that provide only paraphrases of the questions.
• If closed-ended questions were asked, were the answer categories provided appropriate, exhaustive, and mutually exclusive?
• If open-ended questions were asked, how have the answers been categorized? Has the researcher guarded against his or her own bias creeping in during the coding of open-ended responses?
• Are all the questions clear and unambiguous? Could they have been misinterpreted by respondents? If so, could the answers given mean something other than what the researcher has assumed?
• Were the respondents capable of answering the questions asked? If not, they may have answered anyway, but their answers might not mean anything.
• Are any of the questions double-barreled? Look for conjunctions (such as and, or). Are respondents being asked to agree or disagree with two ideas, when they might like to agree with one and disagree with the other?
• Do the questions contain negative terms? If so, respondents may have misunderstood them and answered inappropriately.
• Is there a danger of social desirability in any of the questions? Is any answer so right or so wrong that respondents may have answered on the basis of what people would think of them?
• How would you yourself answer each item? As a general rule, test all questionnaire items by asking yourself how you would answer. Any difficulty you might have in answering might also apply to others. Then, try to assume different points of view (for example, liberal and conservative, religious and unreligious) and ask how the questions might sound to someone with each point of view.
• Has the researcher conducted a secondary analysis of previously collected data? If so, determine the quality of the research that produced the data originally. Also, are the data available for analysis appropriate to the current purposes? Do the questions originally asked reflect adequately on the variables now being analyzed?

Field Research

• What theoretical paradigm has informed the researcher's approach to the study? Has the research set out to test hypotheses or generate theory from the observations? Or is there no concern for theory in the study?
• What are the main variables in this study? How have they been defined and measured? Do you see any problems of validity?
• How about reliability? Would another researcher, observing the same events, classify things the same way?
• Is there any chance that the classification of observations has been influenced by the way those classifications will affect the research findings and/or the researcher's hypotheses?
• If descriptive conclusions have been drawn—for example, "the group's standards were quite conservative"—what are the implicit standards being used?
• How much can the study's findings be generalized to a broader sector of society? What claims has the researcher made in this regard? What is the basis for such claims?
• If people have been interviewed, how were they selected? Do they represent all appropriate types?
• How much did the researcher participate in the events under study? How might that participation have affected the events themselves?
• Did the researcher reveal his or her identity as a researcher? If so, what influence could that revelation have had on the behavior of those being observed?
• Does the research indicate any personal feelings—positive or negative—about those being observed? If so, what effect might these feelings have had on the observations that were made and the conclusions that were drawn from them?
• How has the researcher's own cultural identity or background affected the interpretation of what has been observed?

Content Analysis

• What are the key variables in the analysis? Are they appropriate to the research question being asked?
• What is the source and form of data being analyzed? Are they appropriate to the research questions being asked?
• Is the time frame of the data being analyzed appropriate to the research question?
• What is the unit of analysis?
• If a quantitative analysis has been conducted, (1) has an appropriate sample been selected from the data source and (2) have the appropriate statistical techniques been used?
• If a qualitative analysis has been conducted, (1) has an appropriate range of data been examined and (2) are the researcher's conclusions logically consistent with the data presented?

Analyzing Existing Statistics

• Who originally collected the data being reanalyzed? Were there any flaws in the data-collection methods?
• What was the original purpose of the data collection? Would that have affected the data that were collected?
• What was the unit of analysis of the data? Is it appropriate to the current research question and the conclusions being drawn? Is there a danger of the ecological fallacy?
• When were the data collected? Are they still appropriate to present concerns?
• What are the variables being analyzed in the present research? Were the definitions used by the original researchers appropriate to present interests?

Comparative and Historical Research

• Is this a descriptive or an explanatory study? Does it involve cross-sectional comparisons or changes over time?
• What is the unit of analysis in this study (for example, country, social movement)?
• What are the key variables under study? If it is an explanatory analysis, what causal relationships are examined?
• Does the study involve the use of other research techniques, such as existing statistics, content analysis, surveys, or field research? Use the guidelines elsewhere in this section to assess those aspects of the study.
• Is the range of data appropriate to the analysis: for example, the units being compared or the number of observations made for the purpose of characterizing units?
• If historical or other documents are used as a data source, who produced them and for what purposes? What biases might be embedded in them? Diaries kept by members of the gentry, for example, will not reflect the life of peasants of the same time and country.

Evaluation Research

• What is the social intervention being analyzed? How has it been measured? Are there any problems of validity or reliability?
• Have the appropriate people (or other units of analysis) been observed?
• How has "success" been defined? Where would the success be manifested—in individuals, in organizations, in crime rates? Has it been measured appropriately?
• Has the researcher judged the intervention a success or a failure?
Is the judgment well founded?
• Who paid for the research, and who actually conducted it? Can you be confident of the researcher's objectivity? Did the sponsor interfere in any way?

Data Analysis

• Did the purpose and design of the study call for a qualitative or a quantitative analysis?
• How have nonstandardized data been coded? This question applies to both qualitative and quantitative analysis. To what extent were the codes (1) based on prior theory or (2) generated by the data?
• Has the researcher undertaken all relevant analyses? Have all appropriate variables been identified and examined? Could the correlation observed between two variables have been caused by a third, antecedent variable, making the observed relationship spurious?
• Does a particular research finding really matter? Is an observed difference between subgroups, for example, a large or meaningful one? Are there any implications for action?
• Has the researcher gone beyond the actual findings in drawing conclusions and implications? Are there logical flaws in the analysis and interpretation of data?
• Have the empirical observations of the study revealed new patterns of relationships, providing the bases for grounded theories of social life? Has the researcher looked for disconfirming cases that would challenge the new theories?
• Are the statistical techniques used in the analysis of data appropriate to the levels of measurement of the variables involved?
• If tests of statistical significance were used, have they been interpreted correctly? Has statistical significance been confused with substantive significance?

Reporting

• Has the researcher placed this particular project in the context of previous research on the topic? Does this research add to, modify, replicate, or contradict previous studies?
• In general, has the researcher reported the details of the study design and execution fully?
Are there parts of the report that seem particularly vague or incomplete in the reporting of details?
• Has the researcher reported any flaws or shortcomings in the study design or execution? Are there any suggestions for improving research on the topic in the future?

I hope this section will prove useful to you in reading and understanding social research. The exercises at the end of this chapter will walk you through the reading of two journal articles: one qualitative and one quantitative. As I said earlier, you'll find that your proficiency in reading social research reports will mature with practice. Before discussing how to go about creating social research reports for others to read, let's look at how to read and evaluate data from an increasingly popular source of information—the Internet.

Using the Internet Wisely

In the closing decade of the twentieth century, the World Wide Web developed into a profoundly valuable tool for social research. As it expands exponentially, the web is becoming the mind of humanity, the repository of human knowledge, opinions, and beliefs—carrying with it intellectual insights, misconceptions, and outright bigotry. Clearly, it will continue to evolve as an ever more powerful entity. As with gunpowder and television, the power of the technology does not guarantee that it will always be used wisely. As I write this, a substantial number of faculty still prohibit their students from using web materials. I have opted to encourage use of the web rather than opposing it, but I am mindful of the problems that make many of my colleagues more cautious. In this section of the chapter, I share websites useful to social researchers and give some general advice on searching the web. Then I address the major problems inherent in using the web and suggest ways to avoid them.

Some Useful Websites

The website associated with this book has up-to-date links to useful social research websites.
I’ve placed these materials on the web instead of in an appendix, so they can be revised and updated before the next textbook revision. Nevertheless, I want to mention a few key websites here and, more importantly, offer advice on how to search the web. The first website I’ll mention is the one created to support this textbook and is mentioned at the end of each chapter. You should consider it as an extension of the book: http://sociology.wadsworth.com/babbie_basics4e. In addition to tutoring you on this book and coaching you in your research methods course, the website also provides a great many links that will take you to other useful resources to aid you in both learning and doing social research. Here are just a few generally useful websites that you might like to check out:

• Computer Assisted Qualitative Data Analysis Software, University of Surrey, England: http://caqdas.soc.surrey.ac.uk/
• General Social Survey: http://webapp.icpsr.umich.edu/GSS/
• GSS Resource materials, Queen’s College: http://www.soc.qc.edu/QC_Software/GSS.html
• QUALPAGE: Resources for Qualitative Research: http://www.qualitativeresearch.uga.edu/QualPage/
• Social Sciences Virtual Library: http://www.dialogical.net/socialsciences/index.html
• Statistical Resources on the Web, University of Michigan: http://www.lib.umich.edu/govdocs/stats.html
• USA Statistics in Brief: http://www.census.gov/compendia/statab/brief.html
• U.S. Bureau of the Census: http://www.census.gov/
• Yahoo Social Sciences: http://dir.yahoo.com/Social_Science/

Searching the Web

Now, let’s assume you need some information that you suspect is somewhere on the web, but you don’t know where. Here are some ideas about becoming a web detective. I won’t estimate the number of pages of information on the World Wide Web; its growth curve is so dramatic that any number I might give now would be embarrassingly low by the time you read this. Let’s just say there are millions and millions of pages.
Similarly, estimating the number of “facts” or pieces of data on the web would be impossible, but most of the factual questions you might have can be answered on the web. Finding them involves skill, however. Let’s say you want to know who was the thirteenth president of the United States. That’s easily learned in several ways. The most straightforward would be to open one of the many search engines available to you; let’s say you use Google, found at http://www.google.com. When I searched for “thirteenth president,” my responses began with those shown in Figure 15-1. (Realize that if you replicate this procedure, you may get somewhat different responses, because the content of the web is continuously evolving.)

search engine  A computer program designed to locate where specified terms appear on websites throughout the World Wide Web.

[FIGURE 15-1 Finding the “Thirteenth President.” ©2005 Google. Downloaded September 15, 2005, 12:25 P.M. The first results identify Millard Fillmore as the thirteenth president of the United States; farther down the list are pages on the thirteenth presidents of the University of Puget Sound, Princeton (Woodrow Wilson), and Arizona State University.]

The first response in the list gives us the answer: Millard Fillmore. In this case, it’s not even necessary to follow up on any of the web links given, unless we want to know something more about him. Notice that we have the same answer from three different websites—each adding to our confidence that we have the right answer.
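This kind of cross-checking can be sketched as a simple majority-vote rule. The function and the list of scraped answers below are my own illustration of the idea, not anything produced by the search itself:

```python
from collections import Counter

def consensus(answers, min_agreement=3):
    """Return the most common answer if at least `min_agreement`
    independent sources report it; otherwise return None."""
    if not answers:
        return None
    value, count = Counter(answers).most_common(1)[0]
    return value if count >= min_agreement else None

# Hypothetical answers noted from five different websites:
reported = [
    "Millard Fillmore",   # biography site
    "Millard Fillmore",   # encyclopedia entry
    "Millard Fillmore",   # bookseller's listing
    "Ronald R. Thomas",   # a university's thirteenth president
    "Woodrow Wilson",     # Princeton's thirteenth president
]

print(consensus(reported))  # three sources agree: Millard Fillmore
```

The threshold of three is arbitrary; the point is simply that agreement among independent sources, not any single source, is what earns confidence.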
Notice also that the fourth and fifth answers reflect the ambiguity of our request in not specifying president of “what.”

Here’s a more elaborate example. Let’s say you want to examine differences in the infant mortality rates of countries around the world. You may already know some websites that are likely to have that information, but let’s assume you don’t. Go back to Google or another search engine and search for “infant mortality rate.” If you put your request inside quotation marks, as I just did, the search engine will look for that exact phrase instead of reporting websites that happen to have all three words. Figure 15-2 presents the initial results I received.

[FIGURE 15-2 Search for “Infant Mortality Rate.” ©2005 Google. Downloaded September 15, 2005, 12:33 P.M. The results include NationMaster, GeographyIQ, the United Nations Statistics Division, the CIA World Factbook, the U.S. Census Bureau’s state rankings, the UNDP Human Development Reports, a Cuba news site, and photius.com.]

The fourth web link is to the CIA’s World Factbook, a reference that draws on data from a variety of sources. The third is from the United Nations; the others range from government or commercial data sources to news articles. Realize that Figure 15-2 only presents the first few websites returned by the Google search. Google reported that it had found about 1,630,000 websites that seemed to have the information we were seeking. Notice that several of the web links are probably more specific than we want—one deals only with Cuba, another gives data only on the United States. Often an effective web search requires more than one attempt.
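As an aside on how such queries travel: the quotation marks are simply characters in the query string your browser sends to the search engine, which then decides how to honor them. A sketch using Python’s standard library shows how the quoted-phrase request would be URL-encoded (the `/search` path shown follows Google’s address pattern; the encoding itself is the same for any engine):

```python
from urllib.parse import urlencode

# An exact-phrase search: the quotation marks ask the engine to match
# the phrase as a whole rather than the three words independently.
phrase_query = urlencode({"q": '"infant mortality rate"'})

# A refined search: one bare term plus the quoted phrase.
refined_query = urlencode({"q": 'world "infant mortality rate"'})

# Spaces become '+' and the quotation marks become %22 in the URL.
print("http://www.google.com/search?" + phrase_query)
print("http://www.google.com/search?" + refined_query)
```

Nothing here changes how you search; it only shows that “put the phrase in quotation marks” is literally part of the request transmitted to the engine.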
In this case, I added the word world to the request: world “infant mortality rate.” Like many other search engines, Google interprets this as a request to find websites that contain the word world plus the exact phrase infant mortality rate. Figure 15-3 presents the first set of results.

[FIGURE 15-3 Search for “World ‘Infant Mortality Rate.’” ©2005 Google. Downloaded September 15, 2005, 12:33 P.M. The first results are the CIA World Factbook’s infant mortality rank ordering and its United States entry, followed by Global Geografia, two GeographyIQ rankings, and two Wikipedia articles.]

This time, the first two web links are to the CIA’s World Factbook. The sixth and seventh links are to Wikipedia, a free encyclopedia compiled by the web community. Although commercial websites and almanacs can be useful sources of information, you should, wherever possible, use data presented by those who collect and compile it. In this case, you might want to search further for links to the respected Population Reference Bureau or to the United Nations sites we saw in Figure 15-2. Conducting this search on your own and visiting the web links that result is a useful exercise. You’ll find that some of the sites are discussions of the topic rather than tables of data. Others present a limited set of data (“selected countries”). Thus, compiling a list of web links like this is a step along the way to obtaining relevant data, but it is not the final step.

Evaluating the Quality of Internet Materials

You now know enough about web searches to begin learning through experience. You’ll quickly learn that finding data on the web is relatively easy. Evaluating what you’ve found is a bit more difficult, however. I’ve already alluded to the matter of quality, but there’s much more to be said on the topic. In fact, many other people have said many other things about it. What do you suppose is your best source of such advice? If you said, “The web,” you got it. Open up a search engine and ask it to find websites having to do with “evaluating web sites.” (Using alternate spellings can yield more results; for example, you could also enter “evaluating websites” and get a similar yet different set of entries.)
Figure 15-4 gives you some idea of the extent of advice available to you. As you can tell from the “.edu” in the addresses of most of these sites, this is a topic of concern for colleges and universities. Although each of the various sites approaches the topic differently, the guidance they offer has some elements in common. You would do well to study one or more of the sites in depth. In the meantime, here’s an overview of the most common questions and suggestions for evaluating the data presented on websites.

1. Who/what is the author of the website? The two biggest risks you face in getting information from the web are (1) bias and (2) sloppiness. The democratic beauty of the web is its accessibility to such a large proportion of the population and the lack of censorship. These pluses also present dangers, in that just about anyone can put just about anything on the web. The first thing you should note, therefore, is who the author of the website is: either an organization or an individual.

2. Is the site advocating a particular point of view? Many of the sites on the World Wide Web have been created to support a particular political, religious, nationalistic, social, or other point of view. This fact does not necessarily mean that the data they present are false, though that’s sometimes the case. Beyond outright lying, however, you can be relatively sure that the website will only present data supporting its particular point of view. You can usually tell whether a website is reasonably objective or has an ax to grind, and you should be wary of those that go overboard to convince you of something.

3. Does the website give accurate and complete references? When data are presented, can you tell where they come from—how they were created? If the website is reporting data collected by someone else, are you given sufficient guidance to locate the original researchers?
Or, if the data were compiled by the website authors, do they provide you with sufficiently detailed descriptions of their research methods? If data are presented without such clarifications, you should move on.

4. Are the data up-to-date? Another common problem on the web is that materials may be posted and forgotten. Hence, you may find data reporting crime rates, chronicles of peace negotiations, and so forth that are out-of-date. Be sure that the data you obtain are timely for your purposes.

5. Are the data official? It’s often a good idea to find data at official government research sites, such as the Bureau of the Census (http://www.census.gov/), the Bureau of Labor Statistics (http://www.bls.gov/home.htm), the National Center for Health Statistics (http://www.cdc.gov/nchs/), and others. FedStats (http://www.fedstats.gov/) is a good launching point for finding data among some 100 federal research agencies. As we saw in Chapter 11, data presented by official agencies are not necessarily “The Truth,” but they are grounded in a commitment to objectivity and have checks and balances to support them in achieving that goal.

6. Is it a university research site? Like government research agencies, university research centers and institutes are usually safe resources, committed to conducting professional research and having checks and balances (such as peer review) to support their achieving that.

[FIGURE 15-4 Search for “Evaluating Web Sites.” ©2005 Google. Downloaded September 15, 2005, 12:45 P.M. The results include guides to evaluating web pages from UC Berkeley, Kathy Schrock’s Guide for Educators, Cornell, Lesley University, New Mexico State, UNC, and the W3C.]
Throughout this book, I’ve mentioned the General Social Survey (http://webapp.icpsr.umich.edu/GSS/), conducted regularly by the National Opinion Research Center at the University of Chicago. You could use data presented here with confidence: confidence in the legitimacy of the data and confidence that your instructor will not question your use of that resource.

7. Do the data seem consistent with data from other sites? Verify (cross-check) data wherever possible. We’ve already seen that a web search is likely to turn up more than one possible source of data. Take the time to compare what they present. If several websites present essentially the same data, you can use any of those sources with confidence.

As with so many things, your effective use of the web will improve with practice. Moreover, the web itself will be evolving alongside your use of it.

Citing Internet Materials

If you use materials from the web, you must provide a bibliographic citation that allows your reader to locate the original materials—to see them in context. This also protects you from the serious problem of plagiarism, discussed a little later in this chapter. There are many standardized formats for bibliographic citations, such as those established by the Modern Language Association, the American Psychological Association, and the American Sociological Association. Web materials, unfortunately, don’t fit any of those familiar formats. Fortunately, each of these organizations—and many, many others—has risen to the challenge of web citations. If you don’t believe me, go to your favorite search engine and look for “web citations.” You’ll find plenty of guidance.

Your instructor may prefer a specific format for web citations. However, here are the elements commonly suggested for inclusion:

• The URL or web address. For example, http://www.fedstats.gov/qf/states/50000.html provides demographic data for comparing Vermont with the United States as a whole.
So if I tell you that Vermont grew 8.2 percent during the 1990s, you can go directly to the source of my data.

• The date and time when the site was accessed. Many sites, like the one just cited, do not change, but many others do. It may be useful for the reader to know when you visited the site in question. Some editing guides say to include this, whereas others say not to. When in doubt, check with your instructor or publisher. It’s usually better to have too much information than too little.

• If you’re citing text materials, there may very well be an author and title, as well as publishing information. These should be cited the same way you would cite printed materials: for example, John Doe. 2003. “How I Learned to Love the Web.” Journal of Web Worship 5 (3): 22–45.

• Sometimes, you’ll use the web to read a published journal article, locating it with InfoTrac College Edition or another vehicle. Such materials may be presented in a print format, with page numbers. If so, cite the appropriate page number. Lacking that, you may be able to cite the section where the materials in question appeared.

The goal in all this is to help your reader locate the original web materials you’re using. Although you sometimes cannot give a precise location in an article posted to a website, most browsers allow users to search the site for a specified word or phrase and thus locate the materials being cited.

URL  Web address, typically beginning with “http://”; stands for “uniform resource locator” or “universal resource locator.”

WRITING SOCIAL RESEARCH

Unless research is properly communicated, all the efforts devoted to the various procedures discussed throughout this book will go for naught. This means, first and foremost, that good social reporting requires good English or Spanish or whatever language you use. Whenever we ask the figures “to speak for themselves,” they tend to remain mute.
Whenever we use unduly complex terminology or construction, communication suffers. My first advice to you is to read and reread (at approximately three-month intervals) an excellent small book by William Strunk, Jr., and E. B. White, The Elements of Style (1999; see also Birchfield 1998). If you do this faithfully, and if even 10 percent of the contents rub off, you stand a good chance of making yourself understood and your findings appreciated.

Next, you need to understand that scientific reporting has several functions. First, your report should communicate a body of specific data and ideas. You should provide those specifics clearly and with sufficient detail to permit an informed evaluation by others. Second, you should view your report as a contribution to the general body of scientific knowledge. While remaining appropriately humble, you should always regard your research report as an addition to what we know about social behavior. Finally, the report should stimulate and direct further inquiry. See the box “Communication Is the Key” for more on the importance of knowing how to read and write well.

IN THE REAL WORLD: COMMUNICATION IS THE KEY

No matter what you do with your life—whether in social research or some other worthy pursuit—you’re likely to find yourself regularly using the skills discussed in this chapter. When colleges and universities ask employers for suggestions on how we can better prepare graduates, the most common response, regardless of professional field, tends to be the same: Teach them to write. Whatever career you choose, you’ll benefit greatly from the ability to read a body of literature or a set of data and write coherently about it. Moreover, if you’re typical of recent college cohorts, you’re likely to have several different careers. The ability to read and write effectively will serve you well in all of them.

Some Basic Considerations

Despite these general guidelines, different reports serve different purposes. A report appropriate for one purpose might be wholly inappropriate for another. This section deals with some of the contexts that affect choices in writing.

Audience

Before drafting your report, ask yourself who you hope will read it. Normally you should make a distinction between scientists and general readers. If the report is written for the former, you can make certain assumptions about their existing knowledge and therefore summarize certain points rather than explain them in detail. Similarly, you can use more technical language than would be appropriate for a general audience. At the same time, remain aware that any science has its factions and cults. Terms, assumptions, and special techniques familiar to your immediate colleagues might only confuse other scientists. The sociologist of religion writing for a general sociology audience, for example, should explain previous findings in more detail than he or she would if addressing an audience of sociologists of religion.

Form and Length of Report

My comments here apply to both written and oral reports. Each form, however, affects the nature of the report. It’s useful to think about the variety of reports that might result from a research project. To begin, you may wish to prepare a short research note for publication in an academic or technical journal. Such reports are approximately one to five pages long (typed, double-spaced) and should be concise and direct. In a small amount of space, you can’t present the state of the field in any detail, so your methodological notes must be abbreviated. Basically, you should tell the reader why you feel your findings justify a brief note, then tell what those findings are.

Often researchers must prepare reports for the sponsors of their research. These reports can vary greatly in length.
In preparing such a report, you should bear in mind your audience—scientific or lay—and their reasons for sponsoring the project in the first place. It’s both bad politics and bad manners to bore the sponsors with research findings that have no interest or value to them. At the same time, it may be useful to summarize how the research has advanced basic scientific knowledge (if it has).

Working papers are another form of research reporting. In a large and complex project especially, you’ll find comments on your analysis and the interpretation of your data useful. A working paper constitutes a tentative presentation with an implicit request for comments. Working papers can also vary in length, and they may present all of the research findings of the project or only a portion of them. Because your professional reputation is not at stake in a working paper, feel free to present tentative interpretations that you can’t altogether justify—identifying them as such and asking for evaluations.

Many research projects result in papers delivered at professional meetings. Often, these serve the same purpose as working papers. You can present findings and ideas of possible interest to your colleagues and ask for their comments. Although the length of such professional papers varies, depending on the organization of the meetings, it’s best to say too little rather than too much. Although a working paper may ramble somewhat through tentative conclusions, conference participants should not be forced to sit through an oral unveiling of the same. Interested listeners can always ask for more details later, and uninterested ones can gratefully escape.

Probably the most popular research report is the article published in an academic journal. Again, lengths vary, and you should examine the lengths of articles previously published by the journal in question. As a rough guide, however, 25 typed pages is a good length.
A subsequent section on the organization of the report is based primarily on the structure of a journal article, so I’ll say no more at this point except to indicate that student term papers should follow the same model: as a general rule, a paper that would make a good journal article also makes a good term paper.

A book, of course, represents the most prestigious form of research report. It has the length and detail of a working paper but is more polished. Because publishing research findings as a book lends them greater substance and worth, you have a special obligation to your audience. Although some colleagues may provide comments, possibly leading you to revise your ideas, other readers may be led to accept your findings uncritically.

Aim of the Report

Earlier in this book, we considered the different purposes of social research projects. In preparing your report, keep these different purposes in mind. Some reports focus primarily on the exploration of a topic. As such, their conclusions are tentative and incomplete. If you’re writing this sort of report, clearly indicate to your audience the exploratory aim of the study and present the shortcomings of the particular project. An exploratory report points the way to more-refined research on the topic.

Most research reports have a descriptive element reflecting the descriptive purpose of the studies they document. In yours, carefully distinguish those descriptions that apply only to the sample and those that apply to the population. Give your audience some indication of the probable range of error in any inferential descriptions you make.

Many reports have an explanatory aim: pointing to causal relationships among variables. Depending on your probable audience, carefully delineate the rules of explanation that lie behind your computations and conclusions.
Also, as in the case of description, give your readers some guide to the relative certainty of your conclusions.

If your intention is to test a hypothesis based in theory, you should make that hypothesis clear and succinct. Specify what will constitute acceptance or rejection of the hypothesis and how either outcome reflects on the theoretical underpinnings.

Finally, some research reports propose action. For example, if you’ve studied prejudice, you may suggest in your report how prejudice can be reduced on the basis of your research findings. This suggestion may become a knotty problem for you, however, because your values and orientations may interfere with your proposals. Although it’s perfectly legitimate for such proposals to be motivated by personal values, you must ensure that the data actually warrant the specific actions you’ve proposed. Thus, you should be especially careful to spell out the logic by which you move from empirical data to proposed action.

Organization of the Report

Although the various forms and purposes of reports somewhat affect the way they are organized, knowing a general format for presenting research data can be helpful. The following comments apply most directly to a journal article, but with some modification they apply to most forms of research reports as well.

Purpose and Overview

It’s always helpful if you begin with a brief statement of the purpose of the study and the main findings of the analysis. In a journal article, as we’ve seen, this overview sometimes takes the form of an abstract. Some researchers find this difficult to do. For example, your analysis may have involved considerable detective work, with important findings revealing themselves only as a result of imaginative deduction and data manipulation. You may wish, therefore, to lead the reader through the same exciting process, chronicling the discovery process with a degree of suspense and surprise.
To the extent that this form of reporting gives an accurate picture of the research process, it has considerable instructional value. Nevertheless, many readers may not be interested in following your entire research account, and not knowing the purpose and general conclusions in advance may make it difficult for them to understand the significance of the study. An old forensic dictum says, "Tell them what you're going to tell them; tell them; and tell them what you told them." You would do well to follow this dictum.

Review of the Literature

Next, you must indicate where your report fits into the general body of scientific knowledge. After presenting the general purpose of your study, you should bring the reader up-to-date on the previous research in the area, pointing to general agreements and disagreements among the previous researchers. Your review of the literature should lay the groundwork for your own study, showing why your research may have value in the larger scheme of things.

In some cases, you may wish to challenge previously accepted ideas. Carefully review the studies that have led to the acceptance of those ideas, then indicate the factors that have not been previously considered or the logical fallacies present in the previous research. When you're concerned with resolving a disagreement among previous researchers, you should summarize the research supporting one view, then summarize the research supporting the other, and finally suggest the reasons for the disagreement.

Your review of the literature serves a bibliographic function for readers by indexing the previous research on a given topic. This can be overdone, however, and you should avoid an opening paragraph that runs three pages, mentioning every previous study in the field.
The comprehensive bibliographic function can best be served by a bibliography at the end of the report, and the review of the literature should focus only on those studies that have direct relevance to the present one.

WRITING SOCIAL RESEARCH

Avoiding Plagiarism

Whenever you're reporting on the work of others, you must be clear about who said what. That is, you must avoid plagiarism: the theft of another's words and/or ideas—whether intentional or accidental—and the presentation of those words and ideas as your own. Because this is a common and sometimes unclear problem for college students, especially in regard to the review of the literature, we'll consider the issue here. Realize, of course, that these concerns apply to everything you write.

Here are the ground rules regarding plagiarism:

• You cannot use another writer's exact words without using quotation marks and giving a complete citation, which indicates the source of the quotation such that your reader could locate the quotation in its original context. As a general rule, taking a passage of eight or more words without citation is a violation of federal copyright laws.
• It's also not acceptable to edit or paraphrase another's words and present the revised version as your own work.
• Finally, it's not even acceptable to present another's ideas as your own—even if you use totally different words to express those ideas.

The following examples should clarify what is or is not acceptable in the use of another's work.

The Original Work

Laws of Growth

Systems are like babies: once you get one, you have it. They don't go away. On the contrary, they display the most remarkable persistence. They not only persist; they grow. And as they grow, they encroach. The growth potential of systems was explored in a tentative, preliminary way by Parkinson, who concluded that administrative systems maintain an average growth of 5 to 6 percent per annum regardless of the work to be done. Parkinson was right so far as he goes, and we must give him full honors for initiating the serious study of this important topic. But what Parkinson failed to perceive, we now enunciate—the general systems analogue of Parkinson's Law.

The System Itself Tends To Grow At 5 To 6 Percent Per Annum

Again, this Law is but the preliminary to the most general possible formulation, the Big-Bang Theorem of Systems Cosmology.

Systems Tend To Expand To Fill The Known Universe

(Gall 1975:12–14)

Now let's look at some of the acceptable ways you might make use of Gall's work in a term paper.

• Acceptable: John Gall, in his work Systemantics, draws a humorous parallel between systems and infants: "Systems are like babies: once you get one, you have it. They don't go away. On the contrary, they display the most remarkable persistence. They not only persist; they grow."*
• Acceptable: John Gall warns that systems are like babies. Create a system and it sticks around. Worse yet, Gall notes, systems keep growing larger and larger.**
• Acceptable: It has also been suggested that systems have a natural tendency to persist, even grow and encroach (Gall 1975:12).

Note that the last format requires that you give a complete citation in your bibliography, as I do in this book. Complete footnotes or endnotes work as well. See the publication manuals of various organizations such as the APA or the ASA, as well as the Chicago Manual of Style, for appropriate citation formats.

plagiarism: Presenting someone else's words or thoughts as though they were your own, constituting intellectual theft.

*John Gall, Systemantics: How Systems Work and Especially How They Fail (New York: Quadrangle, 1975), 12.
**Ibid.

Here now are some unacceptable uses of the same material, reflecting some common errors.
• Unacceptable: In this paper, I want to look at some of the characteristics of the social systems we create in our organizations. First, systems are like babies: once you get one, you have it. They don't go away. On the contrary, they display the most remarkable persistence. They not only persist; they grow. [It's unacceptable to quote someone else's materials directly without using quotation marks and giving a full citation.]

• Unacceptable: In this paper, I want to look at some of the characteristics of the social systems we create in our organizations. First, systems are a lot like children: once you get one, it's yours. They don't go away; they persist. They not only persist, in fact: they grow. [It's unacceptable to edit another's work and present it as your own.]

• Unacceptable: In this paper, I want to look at some of the characteristics of the social systems we create in our organizations. One thing I've noticed is that once you create a system, it never seems to go away. Just the opposite, in fact: systems have a tendency to grow. You might say systems are a lot like children in that respect. [It's unacceptable to paraphrase someone else's ideas and present them as your own.]

Each of the preceding unacceptable examples is an instance of plagiarism and represents a serious offense. Admittedly, there are some "gray areas." Some ideas are more or less in the public domain, not "belonging" to any one person. Or you may reach an idea on your own that someone else has already put in writing. If you have a question about a specific situation, discuss it with your instructor in advance.

I've discussed this topic in some detail because, although you must place your research in the context of what others have done and said, the improper use of their materials is a serious offense. Learning to avoid plagiarism is a part of your "coming of age" as a scholar.
Study Design and Execution

A research report containing interesting findings and conclusions will frustrate readers if they can't determine the methodological design and execution of the study. The worth of all scientific findings depends heavily on the manner in which the data were collected and analyzed. In reporting the design and execution of a survey, for example, always include the following: the population, the sampling frame, the sampling method, the sample size, the data-collection method, the completion rate, and the methods of data processing and analysis. Comparable details should be given if other methods are used. The experienced researcher can report these details in a rather short space, without omitting anything required for the reader's evaluation of the study.

Analysis and Interpretation

Having set the study in the perspective of previous research and having described the design and execution of it, you should then present your data. This chapter will shortly provide further guidelines in this regard. For now, a few general comments are in order. The presentation of data, the manipulation of those data, and your interpretations should be integrated into a logical whole. It frustrates the reader to discover a collection of seemingly unrelated analyses and findings with a promise that all the loose ends will be tied together later in the report. Every step in the analysis should make sense at the time it is taken. You should present your rationale for a particular analysis, present the data relevant to it, interpret the results, and then indicate where that result leads next.

Summary and Conclusions

According to the forensic dictum mentioned earlier, summarizing the research report is essential. Avoid reviewing every specific finding, but review all the significant ones, pointing once more to their general significance.
The report should conclude with a statement of what you have discovered about your subject matter and where future research might be directed. Many journal articles end with the statement, "It is clear that much more research is needed." This conclusion is probably always true, but it has little value unless you can offer pertinent suggestions about the nature of that future research. You should review the particular shortcomings of your own study and suggest ways those shortcomings might be avoided.

Guidelines for Reporting Analyses

The presentation of data analyses should provide a maximum of detail without being cluttered. You can accomplish this best by continually examining your report to see whether it achieves the following aims.

If you're using quantitative data, present them so the reader can recompute them. In the case of percentage tables, for example, the reader should be able to collapse categories and recompute the percentages. Readers should receive sufficient information to permit them to compute percentages in the table in the direction opposite from that of your own presentation. Describe all aspects of a quantitative analysis in sufficient detail to permit a secondary analyst to replicate the analysis from the same body of data. This means that he or she should be able to create the same indexes and scales, produce the same tables, arrive at the same regression equations, obtain the same factors and factor loadings, and so forth. This will seldom be done, of course, but if the report allows for it, the reader will be far better equipped to evaluate the report than if it does not.

Provide details. If you're doing a qualitative analysis, you must provide enough detail that your reader has a sense of having made the observations with you. Presenting only those data that support your interpretations is not sufficient; you must also share those data that conflict with the way you've made sense of things.
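The guideline above about recomputable percentage tables can be made concrete: if a report gives the raw cell frequencies, a reader can percentage the table in either direction. A small illustration in Python (all frequencies are invented):

```python
# A hypothetical 2x2 table of raw frequencies: attitude by gender.
# Reporting these counts (not just the percentages) is what lets a
# reader re-percentage the table in the opposite direction.
table = {
    ("favor", "men"): 300, ("favor", "women"): 400,
    ("oppose", "men"): 200, ("oppose", "women"): 100,
}

def percentage(table, within):
    """Percentage the table within rows (within=0) or columns (within=1)."""
    totals = {}
    for key, n in table.items():
        totals[key[within]] = totals.get(key[within], 0) + n
    return {key: 100 * n / totals[key[within]] for key, n in table.items()}

# Percentaged down the columns (within gender): what % of men favor?
by_gender = percentage(table, within=1)
# Percentaged across the rows (within attitude): the opposite direction.
by_attitude = percentage(table, within=0)
print(by_gender[("favor", "men")])            # 60.0
print(round(by_attitude[("favor", "men")], 1))  # 42.9
```

Percentaging down the columns answers "What percentage of men favor?"; percentaging across the rows answers "What percentage of those who favor are men?" Only the raw counts let a reader move between the two.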
Ultimately, you should provide enough information that the reader might reach a different conclusion than you did—though you can hope your interpretation will make the most sense. The reader, in fact, should be in a position to replicate the entire study independently, whether it involves participant observation among heavy-metal groupies, an experiment regarding jury deliberation, or any other study format. Recall that replicability is an essential norm of science. A single study does not prove a point; only a series of studies can begin to do so. And unless studies can be replicated, there can be no meaningful series of studies.

Integrate supporting materials. I have previously mentioned the importance of integrating data and interpretations in the report. Here is a more specific guideline for doing this. Tables, charts, and figures, if any, should be integrated into the text of the report—appearing near that portion of the text discussing them. Sometimes students describe their analyses in the body of the report and place all the tables in an appendix. This procedure greatly impedes the reader, however. As a general rule, it is best to (1) describe the purpose for presenting the table, (2) present it, and (3) review and interpret it.

Draw explicit conclusions. Although research is typically conducted for the purpose of drawing general conclusions, you should carefully note the specific basis for such conclusions. Otherwise you may lead your reader into accepting unwarranted conclusions. Point to any qualifications or conditions warranted in the evaluation of conclusions. Typically, you know best the shortcomings and tentativeness of your conclusions, and you should give the reader the advantage of that knowledge. Failure to do so can misdirect future research and result in a waste of research funds.

As I said at the outset of this discussion, research reports should be written in the best possible literary style.
Writing lucidly is easier for some people than for others, and it's always harder than writing poorly. You are again referred to the Strunk and White book. Every researcher would do well to follow this procedure: Write. Read Strunk and White. Revise. Reread Strunk and White. Revise again. This will be a difficult and time-consuming endeavor, but so is science. A perfectly designed, carefully executed, and brilliantly analyzed study will be altogether worthless unless you can communicate your findings to others. This chapter has attempted to provide some guidelines toward that end. The best guides are logic, clarity, and honesty. Ultimately, there is no substitute for practice.

Going Public

Though I have written this chapter with a particular concern for the research projects you may be called on to undertake in your research methods course, you should realize that graduate and even undergraduate students are increasingly presenting the results of their research as professional papers or published articles. If you would like to explore these possibilities further, you may find state and regional associations to be more open to students than are national associations, although students may present papers to the American Sociological Association, for example. Some associations have special sessions and programs for student participants. You can learn more about these possibilities by visiting the associations' websites to learn of upcoming meetings and the topics for which papers are being solicited.

Typically, you'll submit your paper to someone who has agreed to organize a session with three to five papers on a particular topic. The organizer chooses which of the submissions will be accepted for presentation. Oral presentations at scholarly meetings are typically 15–20 minutes long, with the possibility of questions from the audience. Some presenters read a printed paper, whereas others speak from notes.
Increasingly, presenters use computer slide shows, though such presentations are still in the minority.

WHAT DO YOU THINK? REVISITED

There is a vast amount of information available on the Internet, but it is not all equally trustworthy and usable in scholarly research. The chapter suggested guidelines for sorting the wheat from the chaff. For example, data provided on government websites or on those of university research centers and institutes, although not perfect, are usually dependable. Clarity about how data were collected is a good sign; so are clear citations to any other sources used. Be wary of websites that push a particular point of view or agenda. Their data may be valid and useful, but caution is in order. Never trust websites that are ambiguous about the methods used or about the exact meanings of variables reported on. Finally, look for agreement across several websites, if possible. In some cases, you may find SourceWatch (http://www.sourcewatch.org/) a useful tool to help you judge the trustworthiness of web sources. Sometimes, you will find that a "research team" is actually a public relations firm or that an individual "expert" always seems to report findings in support of a particular company or industry.

To publish an article in a scholarly journal, you would do well to identify a journal that publishes articles on the topic of your research. Again, the journals published by state or regional associations may be the most accessible to student authors. Each journal will contain instructions for submitting articles, including instructions for formatting your article. Typically, articles submitted to a journal are circulated among three or so anonymous reviewers, who make comments and recommendations to the journal's editor. This is referred to as the "peer review" process. Sometimes manuscripts are accepted pretty much as submitted, some are returned for revision and resubmission, and still others are rejected.
The whole process from submission to a decision to publish or reject may take a few months, and there will be a further delay before the article is actually published. To meet the costs of publication, a journal will sometimes require that authors pay a small fee on acceptance. Typically, authors receive extra copies of their article—called "reprints"—to give to friends and family and to satisfy requests from professional colleagues.

THE ETHICS OF READING AND WRITING SOCIAL RESEARCH

I've already commented on some ethical issues involved in writing research reports. However, there are also some ethical issues at play in terms of reading the research literature. There has always been the risk of reviewing the literature with a special eye toward reports that support a point of view you may be fond of. Although wonderful in most respects, the power of the Internet to provide fast and expansive searches can allow more "cherry picking" of supportive research literature. This places an ever greater burden on researchers to exercise professional honesty in representing the history of research findings in a particular area.

Because this chapter concludes the main body of the book, I hope this final section makes clear that research ethics constitute not merely a nice thing to consider as long as it doesn't get in the way, but a fundamental component of social science. Research ethics has not always been recognized in this fashion. When I first began writing this textbook, there was some objection to including this topic. It wasn't so much that researchers objected to the ethical treatment of subjects—ethics simply wasn't considered a proper topic for a book like this one. Attitudes have changed substantially over the years, however. I hope you benefit from understanding the crucial role of ethics in your work as well as in your life.

This chapter, and indeed this book, has provided what I hope will be a springboard for you to engage in and enjoy the practice of social research. The next time you find yourself pondering the cause of prejudice, or observing a political rally, or just plain curious about the latest trends in television, I trust you'll have the tools to explore your world with a social scientific eye.

Main Points

Introduction
❏ Meaningful scientific research is inextricably wed to communication; knowing how to read and write it requires practice.

Reading Social Research
❏ Social researchers can access many resources, including the library and the Internet, for organizing a review of the literature.
❏ Reading scholarly literature is different from reading other works, such as novels.
❏ In reading scholarly literature, one should begin by reading the abstract, skimming the piece, and reading the conclusion to get a good sense of what it is about.
❏ Readers of social science literature should form questions as they go along and take notes.
❏ The key elements to note in reading a research report include theoretical orientation, research design, measurement methods, sampling (if any), and other considerations specific to the several data-collection methods discussed in this book.
❏ The Internet is a powerful tool for social researchers, but it also carries risks.
❏ Not everything you read on the web is necessarily true.
❏ In evaluating a web source, one should ask the following: Who/what is the author of the website? Is the site advocating a particular point of view? Does the site give accurate and complete references? Are the data up-to-date?
❏ Original sources of data are preferred over those that take data from elsewhere.
❏ Official data are usually a good source, although they are subject to error.
❏ The reader of a report should verify (crosscheck) data wherever possible.
❏ Web citations, like other bibliographic references, should be complete—allowing the reader to locate and review the materials cited.

Writing Social Research
❏ Good social research writing begins with good writing, which means, among other things, writing to communicate rather than to impress.
❏ Being mindful of one's audience and one's purpose in writing the report is important.
❏ Avoiding plagiarism—that is, presenting someone else's words or thoughts as though they were one's own—is essential. Whenever using someone else's exact words, writers must be sure to use quotation marks or some other indication that they are quoting. In paraphrasing someone else's words or ideas, writers must provide a full bibliographic citation of the source.
❏ The research report should include an account of the study design and execution.
❏ The analysis of a report should be clear at each step, and its conclusion should be specific but not overly detailed.
❏ To write good reports, researchers need to provide details, integrate supporting materials, and draw explicit conclusions.
❏ Increasingly, students are presenting papers at professional meetings and publishing articles in scholarly journals.

The Ethics of Reading and Writing Social Research
❏ A review of the literature should not be biased toward a particular point of view.
❏ Research ethics is a fundamental component of social science, not a nice afterthought.

Key Terms
abstract, plagiarism, research monograph, search engine, URL

Review Questions and Exercises

1. Analyze a quantitative research report: Stanley Lieberson, Susan Dumais, and Shyon Baumann, "The Instability of Androgynous Names: The Symbolic Maintenance of Gender Boundaries," American Journal of Sociology 105 (5, March 2000): 1249 (can be accessed in print or online through InfoTrac College Edition, for example). Use the following questions as your guide:
a. What are the theoretical underpinnings of the study?
b. How are some of the key variables such as androgynous, racial, and gender segregation conceptualized and operationalized?
c. On what data is this research based?
d. Are there controlling variables?
e. What is the unit of analysis?
f. What type of analysis was done?
g. What did the authors find?
h. What are the strengths and weaknesses in this study?

2. Analyze a qualitative research report: Dingxin Zhao, "State-Society Relations and the Discourses and Activities of the 1989 Beijing Student Movement," American Journal of Sociology 105 (6, May 2000): 1592 (can be accessed in print or online through InfoTrac College Edition, for example). Use the following questions as your guide:
a. What is the author's main research question?
b. What theoretical frameworks does he refer to, and which ones did he use?
c. What methodology is the author using? What type of data collection did he choose? What is the unit of analysis?
d. Does the author have a hypothesis? If so, what is it?
e. How does the author conceptualize key terms such as state, state-society, and traditionalism? What new ideal types of states does he bring to the field?
f. What are his findings?
g. What is the significance of this study? Were you convinced by the author, or do you see weaknesses in the study?

Website for The Basics of Social Research, 4th edition

At the book companion website (http://sociology.wadsworth.com/babbie_basics4e) you will find many resources in addition to ThomsonNow to aid you in studying for your exams. For example, you will find Tutorial Quizzes with feedback, Internet Exercises, Flashcards, and Chapter Tutorials, as well as Extended Projects, InfoTrac College Edition search terms, Social Research in Cyberspace, GSS Data, Web Links, and primers for using various data analysis software such as SPSS and NVivo.

Additional Readings

Alexander, Jan, and Marsha Ann Tate. 1999. Web Wisdom. Mahwah, NJ: Erlbaum. A guide to the evaluation of web materials.
Birchfield, R. W. 1998. The New Fowler's Modern English Usage. 3rd ed. New York: Oxford University Press. H. W. Fowler's concise and witty Modern English Usage has been the chief resource and final word on "proper" English since it was first published in 1926. The third edition ensures that the advice is "modern."

Strunk, William, Jr., and E. B. White. 1999. The Elements of Style. 4th ed. New York: Macmillan. This marvelous little book provides specific guidance as to grammar and spelling, but its primary power is its ability to inspire good writing.

Walker, Janice R., and Todd Taylor. 1998. The Columbia Guide to Online Style. New York: Columbia University Press. A guide to citing web materials in a scholarly report.

Online Study Resources

Go to http://sociology.wadsworth.com/babbie_basics4e and click on ThomsonNow for access to this powerful online study tool. You will get a personalized study plan based on your responses to a diagnostic pretest. Once you have mastered the material with the help of interactive learning tools, you can take a posttest to confirm that you have successfully completed this chapter.

APPENDIXES
A Using the Library
B Random Numbers
C Distribution of Chi Square
D Normal Curve Areas
E Estimated Sampling Error

APPENDIX A Using the Library

INTRODUCTION

We live in a world filled with social science research reports. Our daily newspapers, magazines, professional journals, alumni bulletins, club newsletters—virtually everything we pick up to read—may carry reports dealing with a particular topic. For formal explorations of a topic, of course, the best place to start is still a good college or university library. Today, there are two major approaches to finding library materials: the traditional paper system and the electronic route. Because I don't know what will be available to you, we'll begin with the traditional method and then examine the electronic option.

GETTING HELP

When you want to find something in the library, your best friend is the reference librarian, who is specially trained to find things in the library. Some libraries have specialized reference librarians—for the social sciences, humanities, government documents, and so forth. Find the librarian who specializes in your field. Make an appointment. Tell the librarian what you're interested in. He or she will probably put you in touch with some of the many available reference sources.

REFERENCE SOURCES

You've probably heard the expression "information explosion." Your library is one of the main battlefields. Fortunately, a large number of reference volumes exist to offer a guide to the information that's available.

Books in Print. This volume lists all the books currently in print in the United States—listed separately by author and by title. Out-of-print books can often be found in older editions of Books in Print.

Readers' Guide to Periodical Literature. This annual volume with monthly updates lists articles published in many journals and magazines. Because the entries are organized by subject matter, this is an excellent source for organizing your reading on a particular topic. Figure A-1 presents a sample page from the Readers' Guide.

[Figure A-1: Sample page from the Readers' Guide. Text not available due to copyright restrictions.]

In addition to these general reference volumes, you'll find a great variety of specialized references. Here are just a few:

• Sociological Abstracts
• Psychological Abstracts
• Social Science Index
• Social Science Citation Index
• Popular Guide to Government Publications
• New York Times Index
• Facts on File
• Editorial Research Reports
• Business Periodicals Index
• Monthly Catalog of Government Publications
• Public Affairs Information Service Bulletin
• Education Index
• Applied Science and Technology Index
• A Guide to Geographic Periodicals
• General Science Index
• Biological and Agricultural Index
• Nursing and Applied Health Index
• Nursing Studies Index
• Index to Little Magazines
• Popular Periodical Index
• Biography Index
• Congressional Quarterly Weekly Report
• Library Literature
• Bibliographic Index

USING THE STACKS

Serious research usually involves using the stacks, where most of the library's books are stored. This section provides information about finding books in the stacks.

The Card Catalog

In the traditional paper system, the card catalog is the main reference system for finding out where books are stored. Each book is described on three separate 3-by-5 cards. The cards are then filed in three alphabetical sets. One set is arranged by author, another by title, and the third by subject matter. If you want to find a particular book, you can look it up in either the author file or the title file. If you have only a general subject area of interest, you should thumb through the subject catalog.

Figure A-2 presents a sample card in the card catalog.

[FIGURE A-2 Sample Subject Catalog Card: ADOLESCENCE 301.43 — Eagan, Andrea Boroff. Why am I so miserable if these are the best years of my life? A survival guide for the young woman; with an introduction by Ellen Frankfort. Lippincott 1976, 251p, illus. Source: Lilian L. Shapiro, Teaching Yourself in Libraries (New York: H. W. Wilson, 1978), 3–4. Used by permission.]

Notice the following elements:
1. Subject heading (always in capital letters)
2. Author's name (last name, first name)
3. Title of the book
4. Publisher
5. Date of publication
6. Number of pages in the book plus other information (such as whether the book contains illustrations)
7. Call number needed to find a nonfiction book on the library shelves; fiction is generally found in alphabetical order by the author's name

Library of Congress Classification

Here's a useful strategy to use when you're researching a topic. Once you've identified the call number for a particular book in your subject area, go to the stacks, find that book, and look over the other books on the shelves near it. Because the books are arranged by subject matter, this method will help you locate relevant books you didn't know about. Alternatively, you may want to go directly to the stacks and look at books in your subject area. In most libraries, books are arranged and numbered according to a subject matter classification developed by the Library of Congress. (Some follow the Dewey decimal system.) The following is a selected list of Library of Congress categories.
Library of Congress Classifications (partial)

A   GENERAL WORKS
B   PHILOSOPHY, PSYCHOLOGY, RELIGION
    B-BD   Philosophy
    BF     Psychology
    BL-BX  Religion
C   HISTORY-AUXILIARY SCIENCES
D   HISTORY (except America)
    DA-DR  Europe
    DS     Asia
    DT     Africa
E-F HISTORY (America)
    E           United States
    E51-99      Indians of North America
    E185        Negroes in the United States
    F101-1140   Canada
    F1201-3799  Latin America
G   GEOGRAPHY-ANTHROPOLOGY
    G-GF   Geography
    GC     Oceanology and oceanography
    GN     Anthropology
    GV     Sports, amusements, games
H   SOCIAL SCIENCES
    H62.B2  The Basics of Social Research
    HB-HJ   Economics and business
    HM-HX   Sociology
J   POLITICAL SCIENCE
    JK     United States
    JN     Europe
    JQ     Asia, Africa
    JX     International relations
K   LAW
L   EDUCATION
M   MUSIC
N   FINE ARTS
    NA     Architecture
    NB     Sculpture
    NC     Graphic arts
    ND     Painting
    NE     Engraving
    NK     Ceramics, textiles
P   LANGUAGE AND LITERATURE
    PE     English language
    PG     Slavic language
    PJ-PM  Oriental language
    PN     Drama, oratory, journalism
    PQ     Romance literature
    PR     English literature
    PS     American literature
    PT     Germanic literature
Q   SCIENCE
    QA     Mathematics
    QB     Astronomy
    QC     Physics
    QD     Chemistry
    QE     Geology
    QH-QR  Biology
R   MEDICINE
    RK     Dentistry
    RT     Nursing
S   AGRICULTURE—PLANT AND ANIMAL INDUSTRY
T   TECHNOLOGY
    TA-TL  Engineering
    TR     Photography
U   MILITARY SCIENCE
V   NAVAL SCIENCE
Z   BIBLIOGRAPHY AND LIBRARY SCIENCE

COMPUTERIZED LIBRARY FILES

Increasingly, library materials are catalogued electronically. While there are different computerized library systems, here's a typical example of how they work. Sitting at a computer terminal (in the library, at a computer lab, or at home), you can type the title of a book and in seconds see a video display of a catalog card. If you want to explore the book further, you can type an instruction at the terminal and see an abstract of the book. Alternatively, you might type a subject name and see a listing of all the books and articles written on that topic. You could skim through the list and indicate which ones you want to see.

FIGURE A-3 A Research Summary from Sociological Abstracts

AU Kinloch-Graham-C.
TI The Changing Definition and Content of Sociology in Introductory Textbooks, 1894-1981.
SO International Review of Modern Sociology. 1984, 14, 1, spring, 89-103.
DE Sociology-Education; (D810300). Textbooks; (D863400).
AB An analysis of 105 introductory sociology textbooks published between 1894 & 1981 reveals historical changes in definitions of the discipline & major topics in relation to professional factors & changing societal contexts. Predominant views of sociology in each decade are discussed, with the prevailing view being that of a "scientific study of social structure in order to decrease conflict & deviance, thereby increasing social control." Consistencies in this orientation over time, coupled with the textbooks' generally low sensitivity to social issues, are explored in terms of their authors' relative homogeneity in age & educational backgrounds. 1 Table, 23 References. Modified HA.

Many libraries today provide access to periodicals and books via the World Wide Web. Your library's computerized system should allow you to see which materials are available online. Sometimes whole dissertations or books can be downloaded. Most likely, your largest local library provides document delivery services to its members. Many college libraries now have access to the Educational Resources Information Center (ERIC). This computer-based system allows you to search through hundreds of major educational journals to find articles published in the subject area of your interest (within the field of education). As a rule, each library website should have a list of the databases by discipline that you can visit, which may help you limit the number of titles related to a specific keyword.
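The tagged record format shown in Figure A-3 (AU, TI, SO, DE, AB) lends itself to simple machine processing. As an illustrative sketch only (this is not part of any actual abstracting service's software; the function and field names here are my own), such a record might be parsed like this:

```python
# Map the two-letter tags used in Sociological Abstracts records
# (as shown in Figure A-3) to readable field names.
FIELD_NAMES = {
    "AU": "author",
    "TI": "title",
    "SO": "source",
    "DE": "descriptors",
    "AB": "abstract",
}

def parse_record(lines):
    """Collect tagged lines into a dict; untagged lines continue the previous field."""
    record, current = {}, None
    for line in lines:
        tag, _, rest = line.partition(" ")
        if tag in FIELD_NAMES:
            current = FIELD_NAMES[tag]
            record[current] = rest.strip()
        elif current:
            # A line without a tag is a wrapped continuation of the last field.
            record[current] += " " + line.strip()
    return record

record = parse_record([
    "AU Kinloch-Graham-C.",
    "TI The Changing Definition and Content of Sociology",
    "in Introductory Textbooks, 1894-1981.",
    "SO International Review of Modern Sociology.",
])
print(record["author"])  # Kinloch-Graham-C.
```

A real retrieval system would of course index thousands of such records, but the principle of tagged fields with continuation lines is the same.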
Make sure you narrow your search by limiting it, for instance, by language or by period of publication. Once you identify the articles you're interested in, the computer will print out their abstracts.

Of particular value to social science researchers, the publications Sociological Abstracts and Psychological Abstracts present summaries of books and articles, often prepared by the original authors, so that you can locate a great many relevant references easily and effectively. As you find relevant references, you can track down the original works and see the full details. The summaries are available in both written and computerized forms.

Figure A-3 contains the abstract of an article obtained in a computer search of Sociological Abstracts. I began by asking for a list of articles dealing with sociology textbooks. After reviewing the list, I asked to see the abstracts of each of the listed articles. Here's an example of what I received seconds later: an article by the sociologist Graham C. Kinloch, published in the International Review of Modern Sociology. In case the meaning of the abbreviations in Figure A-3 isn't immediately obvious, I should explain that AU is author; TI is title; SO is the source or location of the original publication; DE indicates classification codes under which the abstract is referenced; and AB is the abstract.

The computerized availability of resources such as Sociological Abstracts provides a powerful research tool for modern social scientists. You'll have the option to download or print, with or without the abstract, any title you find through the library's browsers. If a document is not available in the library itself or via the web, you always have the resource of interlibrary loans, which often are free. Libraries don't own every document or multimedia material (CD-ROMs, videocassettes, DVDs, films), but many have loan agreements that can serve your needs.
You need to be aware of the time you can expect between your request and actually receiving the book or article. In the case of a book that is located in another library close by, for example, it may be faster for you to get it yourself. The key to a good library search is to become well informed. So start networking with librarians, faculty, and peers!

FULL-TEXT ONLINE RESOURCES

Today, it's often possible to get the full text of articles online, and this capacity is growing by leaps and bounds. InfoTrac College Edition is just one of the many ways you can do this. You may have received a subscription to this service with the purchase of this book. If not, your school library may have an institutional subscription that will give you full access. Once you have connected with InfoTrac College Edition, your computer screen should look something like Figure A-4.

FIGURE A-4 Opening Screen for InfoTrac College Edition

You have several options for initiating your search. If you're looking for a particular article, you might enter the title (or a substantial part of it) in the box. If you're looking for a particular author, enter his or her name in the box. Or you can enter a topic that you are interested in researching. Let's say you enter "capital punishment" in the box. Figure A-5 shows what I got in response. (By the time you try this, there may be different articles in the database.) As you can see, there are more than a couple of articles on capital punishment. For most research purposes, periodicals provide a better source than do newspapers, but sometimes newspapers are best. Let's say you're doing research on capital punishment for juveniles, and you can see that there are 100 periodical articles on that topic. Figure A-6 presents a portion of what you should see if you click the appropriate "View" button. (Again, the articles may have changed by the time you try this.)
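The kind of keyword search just described can be sketched in miniature. This is purely illustrative and is not the InfoTrac College Edition interface or API; the article list, field names, and function here are hypothetical:

```python
# Hypothetical sketch of a keyword search over a small article list,
# with an optional filter for periodicals versus newspapers.
ARTICLES = [
    {"title": "Capital Punishment and Deterrence", "source": "periodical"},
    {"title": "Juvenile Justice Reform", "source": "newspaper"},
    {"title": "Capital Punishment for Juveniles", "source": "periodical"},
]

def search(keyword, source=None):
    """Return articles whose titles contain the keyword, optionally limited by source type."""
    keyword = keyword.lower()
    hits = [a for a in ARTICLES if keyword in a["title"].lower()]
    if source is not None:
        hits = [a for a in hits if a["source"] == source]
    return hits

print(len(search("capital punishment")))             # 2
print(len(search("juvenile", source="periodical")))  # 1
```

A service with millions of records adds indexing and ranking, but narrowing by keyword and then by source type is the same logic you apply when you limit a real search.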
Finally, let's take a look at the first of these articles, to see if it would be appropriate for your paper on capital punishment. Figure A-7 presents part of this article.

FIGURE A-5 Index of Articles on Capital Punishment

FIGURE A-6 Articles on Capital Punishment for Juveniles

FIGURE A-7 A Portion of an Article on Juvenile Capital Punishment

Full-text retrieval services such as InfoTrac College Edition offer you a powerful research tool. However, let me close this discussion with two cautionary comments. First, not every article resulting from a search like this will be appropriate to your research purposes, so you may have to read through a great many articles to find what you need. My own, professorial bias is that tools such as this should not require less effort on your part than other types of searches do, but that they will yield a far superior product for the same amount of effort. Second, when using bibliographic materials like these, it's important to remember our earlier discussion of plagiarism. Be sure to quote and cite appropriately. A failure to do so is, at best, sloppy and, at worst, dishonest.

With those two cautions, I encourage you to make full use of these kinds of bibliographic resources. It's also OK to poke around the files out of idle curiosity. You never know what you might find.

ADDITIONAL READINGS

Bart, Pauline, and Linda Frankel. 1986. The Student Sociologist's Handbook. New York: Random House. A survival kit for doing sociological research. Contains a step-by-step guide for writing research papers; chapters on periodicals, abstract and indexing services, bibliographies, bibliographical aids, and other secondary sources; and a complete guide to government and nongovernment sources of data. Special section on sex roles and women's studies.

Li, Tze-chung. 2000. Social Science Reference Sources: A Practical Guide. Westport, CT: Greenwood Press. Lists and describes all types of reference materials, including databases and archives as well as published sources. Organized into two parts: social sciences in general and by discipline.

Richlin-Klonsky, Judith, and Ellen Strenski, eds. 1998. A Guide to Writing Sociology Papers. New York: St. Martin's Press. This is a great little book with good advice on doing research. It's particularly useful for those who are new to sociology or other social science disciplines and have to learn about the most rudimentary aspects of research.

APPENDIX B Random Numbers

[Table of random five-digit numbers omitted; its original row-and-column layout was lost in conversion.]
Abridged from Handbook of Tables for Probability and Statistics, 2nd ed., edited by William H. Beyer (Cleveland: The Chemical Rubber Company, 1968). Used by permission of The Chemical Rubber Company.

APPENDIX C Distribution of Chi Square

Text not available due to copyright restrictions.

APPENDIX D Normal Curve Areas

Text not available due to copyright restrictions.

APPENDIX E Estimated Sampling Error

How to use this table: Find the intersection between the sample size and the approximate percentage distribution of the binomial in the sample. The number appearing at this intersection represents the estimated sampling error, at the 95 percent confidence level, expressed in percentage points (plus or minus).

Example: In a sample of 400 respondents, 60 percent answer yes and 40 percent answer no. The sampling error is estimated at plus or minus 4.9 percentage points. The confidence interval, then, is between 55.1 percent and 64.9 percent.
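The figures in this table can be reproduced with a short calculation. This sketch assumes the table was built from the common two-standard-error approximation, 2 × √(pq/n), for the 95 percent confidence level; the function name is my own:

```python
# Sketch of the sampling-error calculation behind Appendix E,
# assuming the conventional 2 * sqrt(p*q/n) approximation for
# the 95 percent confidence level.
from math import sqrt

def sampling_error(p: float, n: int) -> float:
    """Estimated sampling error, in percentage points (plus or minus)."""
    q = 1.0 - p
    return 100.0 * 2.0 * sqrt(p * q / n)

# The worked example: 400 respondents with a 60/40 split.
err = sampling_error(0.6, 400)
print(round(err, 1))                           # 4.9
print(round(60 - err, 1), round(60 + err, 1))  # 55.1 64.9
```

Notice that the error shrinks with the square root of the sample size (quadrupling n halves the error) and is largest at a 50/50 split, exactly as the table shows.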
We would estimate (95 percent confidence) that the proportion of the total population who would say yes is somewhere within that interval.

Estimated Sampling Error (percentage points, plus or minus), by Binomial Percentage Distribution

Sample Size   50/50   60/40   70/30   80/20   90/10
100           10.0     9.8     9.2     8.0     6.0
200            7.1     6.9     6.5     5.7     4.2
300            5.8     5.7     5.3     4.6     3.5
400            5.0     4.9     4.6     4.0     3.0
500            4.5     4.4     4.1     3.6     2.7
600            4.1     4.0     3.7     3.3     2.4
700            3.8     3.7     3.5     3.0     2.3
800            3.5     3.5     3.2     2.8     2.1
900            3.3     3.3     3.1     2.7     2.0
1000           3.2     3.1     2.9     2.5     1.9
1100           3.0     3.0     2.8     2.4     1.8
1200           2.9     2.8     2.6     2.3     1.7
1300           2.8     2.7     2.5     2.2     1.7
1400           2.7     2.6     2.4     2.1     1.6
1500           2.6     2.5     2.4     2.1     1.5
1600           2.5     2.4     2.3     2.0     1.5
1700           2.4     2.4     2.2     1.9     1.5
1800           2.4     2.3     2.2     1.9     1.4
1900           2.3     2.2     2.1     1.8     1.4
2000           2.2     2.2     2.0     1.8     1.3

GLOSSARY

abstract A summary of a research article. The abstract usually begins the article and states the purpose of the research, the methods used, and the major findings. See Chapter 15.

anonymity Anonymity is guaranteed in a research project when neither the researchers nor the readers of the findings can identify a given response with a given respondent. See Chapter 3.

attribute A characteristic of a person or a thing. See variable and Chapter 1.

average An ambiguous term generally suggesting typical or normal (a central tendency). The mean, median, and mode are specific examples of mathematical averages. See Chapter 14.

axial coding A reanalysis of the results of open coding in the Grounded Theory Method, aimed at identifying the important, general concepts. See Chapter 13.

bias (1) That quality of a measurement device that tends to result in a misrepresentation, in a particular direction, of what is being measured. For example, the questionnaire item "Don't you agree that the president is doing a good job?" would be biased in that it would generally encourage favorable responses. See Chapter 9. (2) The thing inside you that makes other people or groups seem consistently better or worse than they really are.
(3) What a nail looks like after you hit it crooked. (If you drink, don't drive.)

bivariate analysis The analysis of two variables simultaneously, for the purpose of determining the empirical relationship between them. The construction of a simple percentage table or the computation of a simple correlation coefficient are examples of bivariate analyses. See Chapter 14.

Bogardus social distance scale A measurement technique for determining the willingness of people to participate in social relations, of varying degrees of closeness, with other kinds of people. It is an especially efficient technique in that one can summarize several discrete answers without losing any of the original details of the data. See Chapter 6.

case study The in-depth examination of a single instance of some social phenomenon, such as a village, a family, or a juvenile gang. See Chapter 10.

case-oriented analysis (1) An analysis that aims to understand a particular case or several cases by looking closely at the details of each. See Chapter 13. (2) A private investigator's billing system.

closed-ended questions Survey questions in which the respondent is asked to select an answer from among a list provided by the researcher. These are popular in survey research because they provide a greater uniformity of responses and are more easily processed than open-ended questions. See Chapter 9.

cluster sampling (1) A multistage sampling in which natural groups (clusters) are sampled initially, with the members of each selected group being subsampled afterward. For example, you might select a sample of U.S. colleges and universities from a directory, get lists of the students at all the selected schools, then draw samples of students from each. This procedure is discussed in Chapter 7. (2) Pawing around in a box of macadamia nut clusters to take all the big ones for yourself.
codebook (1) The document used in data processing and analysis that tells the location of different data items in a data file. Typically, the codebook identifies the locations of data items and the meaning of the codes used to represent different attributes of variables. See Chapter 14. (2) The document that cost you 38 box tops just to learn that Captain Marvelous wanted you to brush your teeth and always tell the truth. (3) The document that allows CIA agents to learn that Captain Marvelous wants them to brush their teeth.

coding (1) The process whereby raw data are transformed into standardized form suitable for machine processing and analysis. See Chapter 11. (2) A strong drug you may take when you hab a bad code.

cohort study A study in which some specific subpopulation, or cohort, is studied over time, although data may be collected from different members in each set of observations. A study of the occupational history of the class of 1970, in which questionnaires were sent every five years, for example, would be a cohort study. See Chapter 4 for more on this topic (if you want more). See also longitudinal study, panel study, and trend study.

comparative and historical research The examination of societies (or other social units) over time and in comparison with one another. See Chapter 11.

concept mapping (1) The graphical display of concepts and their interrelations, useful in the formulation of theory. See Chapter 13. (2) A masculine technique for finding locations by logic and will, without asking for directions.

conceptualization (1) The mental process whereby fuzzy and imprecise notions (concepts) are made more specific and precise. So you want to study prejudice. What do you mean by prejudice? Are there different kinds of prejudice? What are they? See Chapter 5, which is all about conceptualization and its pal, operationalization. (2) Sexual reproduction among intellectuals.
confidence interval (1) The range of values within which a population parameter is estimated to lie. A survey, for example, may show 40 percent of a sample favoring Candidate A (poor devil). Although the best estimate of the support existing among all voters would also be 40 percent, we would not expect it to be exactly that. We might, therefore, compute a confidence interval (such as from 35 to 45 percent) within which the actual percentage of the population probably lies. Note that we must specify a confidence level in connection with every confidence interval. See Chapter 7. (2) How close you dare to get to an alligator.

confidence level (1) The estimated probability that a population parameter lies within a given confidence interval. Thus, we might be 95 percent confident that between 35 and 45 percent of all voters favor Candidate A. See Chapter 7. (2) How sure you are that the ring you bought from a street vendor for $10 is really a three-carat diamond.

confidentiality A research project guarantees confidentiality when the researcher can identify a given person's responses but promises not to do so publicly. See Chapter 3.

constant comparative method (1) A component of the Grounded Theory Method in which observations are compared with one another and with the evolving inductive theory. See Chapter 13. (2) A blind-dating technique.

construct validity The degree to which a measure relates to other variables as expected within a system of theoretical relationships. See Chapter 5.

content analysis The study of recorded human communications, such as books, websites, paintings, and laws. See Chapter 11.

content validity The degree to which a measure covers the range of meanings included within a concept. See Chapter 5.

contingency question A survey question intended for only some respondents, determined by their responses to some other question.
For example, all respondents might be asked whether they belong to the Cosa Nostra, and only those who said yes would be asked how often they go to company meetings and picnics. The latter would be a contingency question. See Chapter 9.

contingency table (1) A format for presenting the relationships among variables as percentage distributions. See Chapter 14 for several illustrations and guides to making such tables. (2) The card table you keep around in case your guests bring their seven kids with them to dinner.

continuous variable A variable whose attributes form a steady progression, such as age or income. Thus, the ages of a group of people might include 21, 22, 23, 24, and so forth and could even be broken down into fractions of years. Contrast this with discrete variables, such as sex or religious affiliation, whose attributes form discontinuous chunks. See Chapter 14.

control group (1) In experimentation, a group of subjects to whom no experimental stimulus is administered and who resemble the experimental group in all other respects. The comparison of the control group and the experimental group at the end of the experiment points to the effect of the experimental stimulus. See Chapter 8. (2) American Association of Managers.

control variable See test variable.

conversation analysis (CA) A meticulous analysis of the details of conversation, based on a complete transcript that includes pauses, hems, and also haws. See Chapter 13.

correlation (1) An empirical relationship between two variables such that (a) changes in one are associated with changes in the other or (b) particular attributes of one variable are associated with particular attributes of the other. Thus, for example, we say that education and income are correlated in that higher levels of education are associated with higher levels of income. Correlation in and of itself does not constitute a causal relationship between the two variables, but it is one criterion of causality. See Chapter 4.
(2) Someone you and your friend are both related to.

criterion-related validity The degree to which a measure relates to some external criterion. For example, the validity of the College Board exams is shown in their ability to predict the college success of students. Also called predictive validity. See Chapter 5.

cross-case analysis An analysis that involves an examination of more than one case, either a variable-oriented or case-oriented analysis. See Chapter 13.

cross-sectional study A study based on observations representing a single point in time. Contrasted with a longitudinal study. See Chapter 4.

debriefing (1) Interviewing subjects to learn about their experience of participation in the project and to inform them of any unrevealed purpose. This is especially important if there's a possibility that they have been damaged by that participation. See Chapter 3. (2) Pulling someone's shorts down. Don't do that. It's not nice.

deduction (1) The logical model in which specific expectations of hypotheses are developed on the basis of general principles. Starting from the general principle that all deans are meanies, you might anticipate that this one won't let you change courses. This anticipation would be the result of deduction. See also induction and Chapter 1. (2) What the Internal Revenue Service said your good-for-nothing moocher of a brother-in-law technically isn't. (3) Of a duck.

dependent variable (1) A variable assumed to depend on or be caused by another (called the independent variable). If you find that income is partly a function of amount of formal education, income is being treated as a dependent variable. See Chapter 1. (2) A wimpy variable.

dimension A specifiable aspect of a concept. "Religiosity," for example, might be specified in terms of a belief dimension, a ritual dimension, a devotional dimension, a knowledge dimension, and so forth. See Chapter 5.
discrete variable A variable whose attributes are separate from one another, or discontinuous, as in the case of sex or religious affiliation. Contrast this with continuous variables, in which one attribute shades off into the next. Thus, in age (a continuous variable), the attributes progress steadily from 21 to 22 to 23, and so forth, whereas there is no progression from male to female in the case of sex. See Chapter 14.

dispersion The distribution of values around some central value, such as an average. The range is a simple example of a measure of dispersion. Thus, we may report that the mean age of a group is 37.9, and the range is from 12 to 89. See Chapter 14.

double-blind experiment An experimental design in which neither the subjects nor the experimenters know which is the experimental group and which is the control. See Chapter 8.

ecological fallacy Erroneously basing conclusions about individuals solely on the observation of groups. See Chapter 4.

element That unit of which a population is composed and which is selected in a sample. Distinguished from units of analysis, which are used in data analysis. See Chapter 7.

emancipatory research Research conducted for the purpose of benefiting disadvantaged groups. See Chapter 10.

EPSEM (equal probability of selection method) A sample design in which each member of a population has the same chance of being selected into the sample. See Chapter 7.

ethnography A report on social life that focuses on detailed and accurate description rather than explanation. See Chapter 10.

ethnomethodology An approach to the study of social life that focuses on the discovery of implicit, usually unspoken assumptions and agreements; this method often involves the intentional breaking of agreements as a way of revealing their existence. See Chapter 10.

evaluation research Research undertaken for the purpose of determining the impact of some social intervention, such as a program aimed at solving a social problem. See Chapter 12.
experimental group In experimentation, a group of subjects to whom an experimental stimulus is administered. Compare with control group. See Chapter 8.

extended case method A technique developed by Michael Burawoy in which case study observations are used to discover flaws in and to improve existing social theories. See Chapter 10.

external invalidity Refers to the possibility that conclusions drawn from experimental results may not be generalizable to the "real" world. See Chapter 8 and also internal invalidity.

external validation The process of testing the validity of a measure, such as an index or scale, by examining its relationship to other, presumed indicators of the same variable. If the index really measures prejudice, for example, it should correlate with other indicators of prejudice. See Chapter 6 for a fuller discussion and illustrations.

face validity (1) That quality of an indicator that makes it seem a reasonable measure of some variable. That the frequency of attendance at religious services is some indication of a person's religiosity seems to make sense without a lot of explanation. It has face validity. See Chapter 5. (2) When your face looks like your driver's license photo (rare and perhaps unfortunate).

field experiment A formal experiment conducted outside the laboratory, in a natural setting. See Chapter 8.

focus group A group of subjects interviewed together, prompting a discussion. The technique is frequently used by market researchers, who ask a group of consumers to evaluate a product or discuss a type of commodity, for example. See Chapter 10.

frequency distribution (1) A description of the number of times the various attributes of a variable are observed in a sample. The report that 53 percent of a sample were men and 47 percent were women would be a simple example of a frequency distribution.
Another example would be the report that 15 of the cities studied had populations under 10,000, 23 had populations between 10,000 and 25,000, and so forth. See Chapter 14. (2) A radio dial. grounded theory (1) An inductive approach to the study of social life that attempts to generate a theory from the constant comparing of unfolding observations. This differs greatly from hypothesis testing, in which theory is used to generate hypotheses to be tested through observations. See Chapter 10. (2) A theory that is not allowed to fly. Grounded Theory Method (GTM) An inductive approach to research introduced by Barney Glaser and Anselm Strauss in which theories are generated solely from an examination of data rather than being derived deductively. See Chapter 13. Guttman scale (1) A type of composite measure used to summarize several discrete observations and to represent some more-general variable. See Chapter 6. (2) The device Louis Guttman weighed himself on. hypothesis A specified testable expectation about empirical reality that follows from a more general proposition; more generally, an expectation about the nature of things derived from a theory. It is a statement of something that ought to be observed in the real world if the theory is correct. See Chapter 2. idiographic An approach to explanation in which we seek to exhaust the idiosyncratic causes of a particular condition or event. Imagine trying to list all the reasons why you chose to attend your particular college. Given all those reasons, it’s difficult to imagine your making any other choice. Contrasted with nomothetic. See Chapter 1. independent variable (1) A variable with values that are not problematical in an analysis but are taken as simply given. An independent variable is presumed to cause or determine a dependent variable. If we discover that religiosity is partly a function of sex—women are more religious than are men—sex is the independent variable and religiosity is the dependent variable.
Note that any given variable might be treated as independent in one part of an analysis and dependent in another part of it. Religiosity might become an independent variable in an explanation of crime. See Chapter 1. (2) A variable that refuses to take advice. index A type of composite measure that summarizes and rank-orders several specific observations and represents some more general dimension. Contrasted with scale. See Chapter 6. indicator An observation that we choose to consider as a reflection of a variable we wish to study. Thus, for example, attending religious services might be considered an indicator of religiosity. See Chapter 5. induction (1) The logical model in which general principles are developed from specific observations. Having noted that Jews and Catholics are more likely to vote Democratic than are Protestants, you might conclude that religious minorities in the United States are more affiliated with the Democratic party, and then your task is to explain why. This would be an example of induction. See also deduction and Chapter 1. (2) The culinary art of stuffing ducks. informant Someone well versed in the social phenomenon that you wish to study and who is willing to tell you what he or she knows about it. If you were planning participant observation among the members of a religious sect, you would do well to make friends with someone who already knows about them—possibly a member of the sect—who could give you some background information about them. Not to be confused with a respondent. See Chapter 7. informed consent A norm in which subjects base their voluntary participation in research projects on a full understanding of the possible risks involved. See Chapter 3. institutional ethnography A research technique in which the personal experiences of individuals are used to reveal power relationships and other characteristics of the institutions within which they operate. See Chapter 10.
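The index entry above describes summing several specific observations to rank-order cases on a more general dimension. A minimal sketch of such an additive index; the items, scores, and respondent IDs are all invented for illustration:

```python
# Sketch of a simple additive index: each respondent answers several
# items, and the index score is the sum of item scores (here 1 = the
# response indicating the dimension of interest, 0 = otherwise).
respondents = {
    "r1": [1, 1, 0, 1],  # hypothetical item scores for respondent r1
    "r2": [0, 0, 1, 0],
    "r3": [1, 1, 1, 1],
}

index_scores = {rid: sum(items) for rid, items in respondents.items()}
print(index_scores)  # {'r1': 3, 'r2': 1, 'r3': 4}
```

The scores rank-order the respondents (r3 highest, r2 lowest) on the underlying dimension, which is all a simple index claims to do.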
interest convergence The thesis that majority group members will only support the interests of minorities when those actions also support the interests of the majority group. See Chapter 2. internal invalidity Refers to the possibility that the conclusions drawn from experimental results may not accurately reflect what went on in the experiment itself. See also external invalidity and Chapter 8. interval measure A level of measurement describing a variable whose attributes are rank-ordered and have equal distances between adjacent attributes. The Fahrenheit temperature scale is an example of this, because the distance between 17 and 18 is the same as that between 89 and 90. See Chapter 5 and nominal measure, ordinal measure, and ratio measure. interview A data-collection encounter in which one person (an interviewer) asks questions of another (a respondent). Interviews may be conducted face-to-face or by telephone. See Chapter 9 for more information on interviewing as a method of survey research. item analysis An assessment of whether each of the items included in a composite measure makes an independent contribution or merely duplicates the contribution of other items in the measure. See Chapter 6. judgmental sampling (1) See purposive sampling and Chapter 7. (2) A sampling of opinionated people. latent content (1) In connection with content analysis, the underlying meaning of communications as distinguished from their manifest content. See Chapter 11. (2) What you need to make a latent. Likert scale A type of composite measure developed by Rensis Likert in an attempt to improve the levels of measurement in social research through the use of standardized response categories in survey questionnaires to determine the relative intensity of different items.
Likert items are those using such response categories as “strongly agree,” “agree,” “disagree,” and “strongly disagree.” Such items may be used in the construction of true Likert scales as well as other types of composite measures. See Chapter 6. longitudinal study A study design involving data collected at different points in time, as contrasted with a cross-sectional study. See also Chapter 4 and cohort study, panel study, and trend study. macrotheory A theory aimed at understanding the “big picture” of institutions, whole societies, and the interactions among societies. Karl Marx’s examination of the class struggle is an example of macrotheory. Contrasted with microtheory. See Chapter 2. manifest content (1) In connection with content analysis, the concrete terms contained in a communication, as distinguished from latent content. See Chapter 11. (2) What you have after a manifest bursts. matching In connection with experiments, the procedure whereby pairs of subjects are matched on the basis of their similarities on one or more variables, and one member of the pair is assigned to the experimental group and the other to the control group. See Chapter 8. mean (1) An average computed by summing the values of several observations and dividing by the number of observations. If you now have a grade point average of 4.0 based on 10 courses, and you get an F in this course, your new grade point (mean) average will be 3.6. See Chapter 14. (2) The quality of the thoughts you might have if your instructor did that to you. median (1) An average representing the value of the “middle” case in a rank-ordered set of observations. If the ages of five men are 16, 17, 20, 54, and 88, the median would be 20. (The mean would be 39.) See Chapter 14. (2) The dividing line between safe driving and exciting driving. memoing Writing memos that become part of the data for analysis in qualitative research such as grounded theory.
Memos can describe and define concepts, deal with methodological issues, or offer initial theoretical formulations. See Chapter 13. microtheory A theory aimed at understanding social life at the level of individuals and their interactions. Explaining how the play behavior of girls differs from that of boys is an example of microtheory. Contrasted with macrotheory. See Chapter 2. mode (1) An average representing the most frequently observed value or attribute. If a sample contains 1,000 Protestants, 275 Catholics, and 33 Jews, “Protestant” is the modal category. See Chapter 14 for more thrilling disclosures about averages. (2) Better than apple pie à la median. multiple time-series designs The use of more than one set of data that were collected over time, as in accident rates over time in several states or cities, so that comparison can be made. See Chapter 12. multivariate analysis The analysis of the simultaneous relationships among several variables. Examining simultaneously the effects of age, sex, and social class on religiosity would be an example of multivariate analysis. See Chapter 14. naturalism An approach to field research based on the assumption that an objective social reality exists and can be observed and reported accurately. See Chapter 10. nominal measure A variable whose attributes have only the characteristics of exhaustiveness and mutual exclusiveness. In other words, a level of measurement describing a variable that has attributes that are merely different, as distinguished from ordinal, interval, or ratio measures. Sex is an example of a nominal measure. See Chapter 5. nomothetic An approach to explanation in which we seek to identify a few causal factors that generally impact a class of conditions or events. Imagine the two or three key factors that determine which colleges students choose, such as proximity, reputation, and so forth. Contrasted with idiographic. See also Chapter 1.
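The mean, median, and mode entries above each come with worked figures; this short sketch simply recomputes them with Python's standard library:

```python
# Recomputing the glossary's own examples for the three averages.
from statistics import mean, median, mode

ages = [16, 17, 20, 54, 88]           # from the median entry
print(median(ages))                    # 20 -- the "middle" case
print(mean(ages))                      # 39 -- sum divided by count

# From the mode entry: the most frequently observed attribute.
affiliations = ["Protestant"] * 1000 + ["Catholic"] * 275 + ["Jewish"] * 33
print(mode(affiliations))              # Protestant -- the modal category

# From the mean entry: a 4.0 average over 10 courses plus one F (0.0).
grades = [4.0] * 10 + [0.0]
print(round(mean(grades), 1))          # 3.6
```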
nonequivalent control group A control group that is similar to the experimental group but is not created by the random assignment of subjects. This sort of control group can differ significantly from the experimental group in terms of the dependent variable or variables related to it. See Chapter 12. nonprobability sampling Any technique in which samples are selected in some way not suggested by probability theory. Examples include reliance on available subjects as well as purposive (judgmental), snowball, and quota sampling. See Chapter 7. null hypothesis (1) In connection with hypothesis testing and tests of statistical significance, that hypothesis that suggests there is no relationship among the variables under study. You may conclude that the variables are related after having statistically rejected the null hypothesis. See Chapter 2. (2) An expectation about nulls. open coding The initial classification and labeling of concepts in qualitative data analysis. In open coding, the codes are suggested by the researchers’ examination and questioning of the data. See Chapter 13. open-ended questions Questions for which the respondent is asked to provide his or her own answers. In-depth, qualitative interviewing relies almost exclusively on open-ended questions. See Chapter 9. operational definition The concrete and specific definition of something in terms of the operations by which observations are to be categorized. The operational definition of “earning an A in this course” might be “correctly answering at least 90 percent of the final exam questions.” See Chapter 2. operationalization (1) One step beyond conceptualization. Operationalization is the process of developing operational definitions, or specifying the exact operations involved in measuring a variable. See Chapter 2. (2) Surgery on intellectuals. ordinal measure A level of measurement describing a variable with attributes we can rank-order along some dimension.
An example is socioeconomic status as composed of the attributes high, medium, low. See Chapter 5 and nominal measure, interval measure, and ratio measure. panel study A type of longitudinal study, in which data are collected from the same set of people (the sample or panel) at several points in time. See Chapter 4 and cohort, longitudinal, and trend study. paradigm (1) A model or framework for observation and understanding, which shapes both what we see and how we understand it. The conflict paradigm causes us to see social behavior one way, the interactionist paradigm causes us to see it differently. See Chapter 2. (2) $0.20. parameter The summary description of a given variable in a population. See Chapter 7. participatory action research An approach to social research in which the people being studied are given control over the purpose and procedures of the research; intended as a counter to the implicit view that researchers are superior to those they study. See Chapter 10. plagiarism Presenting someone else’s words or thoughts as though they were your own, constituting intellectual theft. See Chapter 15. population The theoretically specified aggregation of the elements in a study. See Chapter 7. posttesting The remeasurement of a dependent variable among subjects after they’ve been exposed to a stimulus representing an independent variable. See pretesting and Chapter 8. PPS (probability proportionate to size) (1) This refers to a type of multistage cluster sample in which clusters are selected, not with equal probabilities (see EPSEM) but with probabilities proportionate to their sizes—as measured by the number of units to be subsampled. See Chapter 7. (2) The odds on who gets to go first: you or the 275-pound fullback. pretesting The measurement of a dependent variable among subjects before they are exposed to a stimulus representing an independent variable. See posttesting and Chapter 8. 
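The PPS entry above describes selecting clusters with probabilities proportionate to their sizes. A rough sketch with invented cluster names and sizes; note one simplification: real multistage PPS designs typically select clusters without replacement, whereas `random.choices` samples with replacement:

```python
# Sketch of probability-proportionate-to-size selection: larger clusters
# get proportionally larger chances of being drawn. Data are invented.
import random

clusters = {"City A": 50_000, "City B": 10_000, "City C": 2_000}

random.seed(7)  # fixed seed so the sketch is reproducible
chosen = random.choices(
    list(clusters),
    weights=list(clusters.values()),  # selection weight = cluster size
    k=2,                              # note: choices() samples with replacement
)
print(chosen)
```

With these weights, City A is 25 times as likely as City C to be drawn on any single selection, which is the point of PPS: units inside small clusters get a compensating chance at the subsampling stage.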
probability sampling The general term for samples selected in accord with probability theory, typically involving some random-selection mechanism. Specific types of probability sampling include EPSEM, PPS, simple random sampling, and systematic sampling. See Chapter 7. probe A technique employed in interviewing to solicit a more complete answer to a question. It is a nondirective phrase or question used to encourage a respondent to elaborate on an answer. Examples include “Anything more?” and “How is that?” See Chapter 9 for a discussion of interviewing. program evaluation/outcome assessment The determination of whether a social intervention is producing the intended result. See Chapter 12. purposive sampling A type of nonprobability sampling in which the units to be observed are selected on the basis of the researcher’s judgment about which ones will be the most useful or representative. Also called judgmental sampling. See Chapter 7. qualitative analysis (1) The nonnumerical examination and interpretation of observations, for the purpose of discovering underlying meanings and patterns of relationships. This approach is most typical of field research and historical research. See Chapter 13. (2) A classy analysis. qualitative interview Contrasted with survey interviewing, the qualitative interview is based on a set of topics to be discussed in depth rather than the use of standardized questions. See Chapter 10. quantitative analysis (1) The numerical representation and manipulation of observations for the purpose of describing and explaining the phenomena that those observations reflect. See Chapter 14. (2) A BIG analysis. quasi experiments Nonrigorous inquiries somewhat resembling controlled experiments but lacking key elements such as pre- and posttesting and/or control groups. See Chapter 12. questionnaire A document containing questions and other types of items designed to solicit information appropriate for analysis.
Questionnaires are used primarily in survey research but also in experiments, field research, and other modes of observation. See Chapter 9. quota sampling A type of nonprobability sampling in which units are selected into a sample on the basis of prespecified characteristics, so that the total sample will have the same distribution of characteristics assumed to exist in the population being studied. See Chapter 7. random selection A sampling method in which each element has an equal chance of selection independent of any other event in the selection process. See Chapter 7. randomization A technique for assigning experimental subjects to experimental and control groups randomly. See Chapter 8. ratio measure A level of measurement describing a variable with attributes that have all the qualities of nominal, ordinal, and interval measures and in addition are based on a “true zero” point. Age is an example of a ratio measure. See Chapter 5 and interval measure, nominal measure, and ordinal measure. reactivity The problem that the subjects of social research may react to the fact of being studied, thus altering their behavior from what it would have been normally. See Chapter 10. reductionism (1) A fault of some researchers: a strict limitation (reduction) of the kinds of concepts to be considered relevant to the phenomenon under study. See Chapter 4. (2) The cloning of ducks. reliability (1) That quality of measurement method that suggests that the same data would have been collected each time in repeated observations of the same phenomenon. In the context of a survey, we would expect that the question “Did you attend religious services last week?” would have higher reliability than the question “About how many times have you attended religious services in your life?” This is not to be confused with validity. See Chapter 5. (2) Quality of repeatability in untruths. replication The duplication of an experiment to expose or reduce error. See Chapter 1.
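Randomization, as defined above, can be sketched as a shuffle-and-split: subjects are put in random order and then divided between the experimental and control groups. The subject IDs are hypothetical:

```python
# Sketch of random assignment to experimental and control groups.
import random

subjects = list(range(1, 21))  # 20 hypothetical subject IDs

random.shuffle(subjects)       # random order, independent of any subject trait
experimental = subjects[:10]   # first half will receive the stimulus
control = subjects[10:]        # second half will not

print(len(experimental), len(control))  # 10 10
```

Because assignment depends only on the shuffle, any systematic difference between the two groups before the stimulus is a matter of chance rather than of the subjects' characteristics.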
representativeness (1) That quality of a sample of having the same distribution of characteristics as the population from which it was selected. By implication, descriptions and explanations derived from an analysis of the sample may be assumed to represent similar ones in the population. Representativeness is enhanced by probability sampling and provides for generalizability and the use of inferential statistics. See Chapter 7. (2) A noticeable quality in the presentation-of-self of some members of the U.S. Congress. research monograph A book-length research report, either published or unpublished. This is distinguished from a textbook, a book of essays, a novel, and so forth. See Chapter 15. respondent A person who provides data for analysis by responding to a survey questionnaire. See Chapter 9. response rate The number of people participating in a survey divided by the number selected in the sample, in the form of a percentage. This is also called the completion rate or, in self-administered surveys, the return rate: the percentage of questionnaires sent out that are returned. See Chapter 9. sampling error The degree of error to be expected in probability sampling. The formula for determining sampling error contains three factors: the parameter, the sample size, and the standard error. See Chapter 7. sampling frame That list or quasi list of units composing a population from which a sample is selected. If the sample is to be representative of the population, it is essential that the sampling frame include all (or nearly all) members of the population. See Chapter 7. sampling interval The standard distance (k) between elements selected from a population for a sample. See Chapter 7. sampling ratio The proportion of elements in the population that are selected to be in a sample. See Chapter 7. sampling unit That element or set of elements considered for selection in some stage of sampling. See Chapter 7.
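The response rate, sampling interval, and sampling ratio entries above are all simple arithmetic; a sketch with invented figures:

```python
# Response rate: participants divided by those selected, as a percentage.
selected, completed = 400, 312        # hypothetical survey figures
response_rate = 100 * completed / selected
print(f"response rate = {response_rate:.1f}%")      # 78.0%

# Sampling interval k: population size divided by desired sample size;
# sampling ratio: the inverse proportion.
population_size, sample_size = 10_000, 400
k = population_size // sample_size
sampling_ratio = sample_size / population_size
print(f"k = {k}, sampling ratio = {sampling_ratio}")  # k = 25, ratio = 0.04
```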
scale (1) A type of composite measure composed of several items that have a logical or empirical structure among them. Examples of scales include Bogardus social distance, Guttman, Likert, and Thurstone scales. Contrasted with index. See also Chapter 6. (2) One of the less appetizing parts of a fish. search engine A computer program designed to locate where specified terms appear on websites throughout the World Wide Web. See Chapter 15. secondary analysis (1) A form of research in which the data collected and processed by one researcher are reanalyzed—often for a different purpose—by another. This is especially appropriate in the case of survey data. Data archives are repositories or libraries for the storage and distribution of data for secondary analysis. See Chapter 9. (2) Estimating the weight and speed of an opposing team’s linebackers. selective coding In Grounded Theory Method, this analysis builds on the results of open coding and axial coding to identify the central concept that organizes the other concepts that have been identified in a body of textual materials. See Chapter 13. semantic differential A questionnaire format in which the respondent is asked to rate something in terms of two opposite adjectives (e.g., rate textbooks as “boring” or “exciting”), using qualifiers such as “very,” “somewhat,” “neither,” “somewhat,” and “very” to bridge the distance between the two opposites. See Chapter 6. semiotics The study of signs and the meanings associated with them. This is commonly associated with content analysis. See Chapter 13. simple random sampling (1) A type of probability sampling in which the units composing a population are assigned numbers. A set of random numbers is then generated, and the units having those numbers are included in the sample. Although probability theory and the calculations it provides assume this basic sampling method, it’s seldom used, for practical reasons.
An equivalent alternative is the systematic sample (with a random start). See Chapter 7. (2) A random sample with a low IQ. snowball sampling (1) A nonprobability-sampling method, often employed in field research, whereby each person interviewed may be asked to suggest additional people for interviewing. See Chapter 7. (2) Picking the icy ones to throw at your methods instructor. social artifact Any product of social beings or their behavior. Can be a unit of analysis. See Chapter 4. social indicators Measurements that reflect the quality or nature of social life, such as crime rates, infant mortality rates, number of physicians per 100,000 population, and so forth. Social indicators are often monitored to determine the nature of social change in a society. See Chapter 12. spurious relationship A coincidental statistical correlation between two variables, shown to be caused by some third variable. For example, there is a positive relationship between the number of fire trucks responding to a fire and the amount of damage done: the more trucks, the more damage. The third variable is the size of the fire. They send lots of fire trucks to a large fire and a lot of damage is done because of the size of the fire. For a little fire, they just send a little fire truck, and not much damage is done because it’s a small fire. Sending more fire trucks does not cause more damage. For a given size of fire, in fact, sending more trucks would reduce the amount of damage. See Chapter 4. standard deviation (1) A measure of dispersion around the mean, calculated so that approximately 68 percent of the cases will lie within plus or minus one standard deviation from the mean, 95 percent will lie within plus or minus two standard deviations, and 99.7 percent will lie within three standard deviations. Thus, for example, if the mean age in a group is 30 and the standard deviation is 10, then 68 percent have ages between 20 and 40.
The smaller the standard deviation, the more tightly the values are clustered around the mean; if the standard deviation is high, the values are widely spread out. See Chapter 14. (2) Routine rule-breaking. statistic The summary description of a variable in a sample, used to estimate a population parameter. See Chapter 7. stratification The grouping of the units composing a population into homogeneous groups (or strata) before sampling. This procedure, which may be used in conjunction with simple random, systematic, or cluster sampling, improves the representativeness of a sample, at least in terms of the variables used for stratification. See Chapter 7. study population That aggregation of elements from which a sample is actually selected. See Chapter 7. systematic sampling (1) A type of probability sampling in which every kth unit in a list is selected for inclusion in the sample—for example, every 25th student in the college directory of students. You compute k by dividing the size of the population by the desired sample size; k is called the sampling interval. Within certain constraints, systematic sampling is a functional equivalent of simple random sampling and usually easier to do. Typically, the first unit is selected at random. See Chapter 7. (2) Picking every third one whether it’s icy or not. See snowball sampling (2). theory A systematic explanation for the observations that relate to a particular aspect of life: juvenile delinquency, for example, or perhaps social stratification or political revolution. See Chapter 1. Thurstone scale A type of composite measure, constructed in accord with the weights assigned by “judges” to various indicators of some variables. See Chapter 6. time-series design A research design that involves measurements made over some period, such as the study of traffic accident rates before and after lowering the speed limit. See Chapter 12.
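Systematic sampling, as defined above, can be sketched directly: take every kth element after a random start within the first interval. The population here is just numbered elements, invented for illustration:

```python
# Sketch of systematic sampling with a random start.
import random

population = list(range(1, 101))     # 100 hypothetical elements
sample_size = 10
k = len(population) // sample_size   # the sampling interval (here 10)

random.seed(3)
start = random.randrange(k)          # random start within the first interval
sample = population[start::k]        # every kth element thereafter

print(sample)
```

Every element still has a 1-in-k chance of selection, which is why, within the constraints the entry notes (chiefly no periodic ordering in the list), this is treated as a functional equivalent of simple random sampling.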
trend study A type of longitudinal study in which a given characteristic of some population is monitored over time. An example would be the series of Gallup Polls showing the electorate’s preferences for political candidates over the course of a campaign, even though different samples were interviewed at each point. See Chapter 4 and cohort, longitudinal, and panel study. typology (1) The classification (typically nominal) of observations in terms of their attributes on two or more variables. The classification of newspapers as liberal-urban, liberal-rural, conservative-urban, or conservative-rural would be an example. See Chapter 6. (2) Apologizing for your neckwear. units of analysis The what or whom being studied. In social science research, the most typical units of analysis are individual people. See Chapter 4. univariate analysis The analysis of a single variable, for purposes of description. Frequency distributions, averages, and measures of dispersion are examples of univariate analysis, as distinguished from bivariate and multivariate analysis. See Chapter 14. unobtrusive research Methods of studying social behavior without affecting it. This includes content analysis, analysis of existing statistics, and comparative and historical research. See Chapter 11. URL Web address, typically beginning with “http://”; stands for “uniform resource locator” or “universal resource locator.” See Chapter 15. validity A term describing a measure that accurately reflects the concept it is intended to measure. For example, your IQ would seem a more valid measure of your intelligence than would the number of hours you spend in the library. Though the ultimate validity of a measure can never be proven, we may agree to its relative validity on the basis of face validity, criterion-related validity, content validity, construct validity, internal validation, and external validation. This must not be confused with reliability. See Chapter 5.
variable A logical grouping of attributes. The variable sex is made up of the attributes male and female. See Chapter 1. variable-oriented analysis An analysis that describes and/or explains a particular variable. See Chapter 13. weighting Assigning different weights to cases that were selected into a sample with different probabilities of selection. In the simplest scenario, each case is given a weight equal to the inverse of its probability of selection. When all cases have the same chance of selection, no weighting is necessary. See Chapter 7.
Liberties Law Review 37:369–87. Cooper-Stephenson, Cynthia, and Athanasios The- Dillman, Don A. 1978. Mail and Telephone Surveys: ologides. 1981. “Nutrition in Cancer: Physicians’ The Total Design Method. New York: Wiley. Knowledge, Opinions, and Educational Needs.” 1999. Mail and Telephone Surveys: The Tailored Journal of the American Dietetic Association, May, pp. 472–76. Couper, Mick P. 2001. “Web Surveys: A Review of Issues and Approaches.” Public Opinion Quarterly 64:464–94. Craig, R. Stephen. 1992. “The Effect of Television Day Part on Gender Portrayals in Television Commercials: A Content Analysis.” Sex Roles 26 (5/6): 197–211. Crawford, Kent S., Edmund D. Thomas, and Jeffrey J. 527 Design Method. 2nd ed. New York: Wiley. Doyle, Sir Arthur Conan. [1891] 1892. “A Scandal in Bohemia.” First published in The Strand, July 1891. Reprinted in The Original Illustrated Sherlock Holmes, pp. 11–25. Secaucus, NJ: Castle. DuBois, W. E. B. 1903. The Souls of Black Folk. Chicago: McClurg. Durkheim, Emile. [1893] 1964. The Division of Labor in Fink. 1980. “Pygmalion at Sea: Improving the Society. Translated by George Simpson. New York: Work Effectiveness of Low Performers.” Journal Free Press. of Applied Behavioral Science, October–December, pp. 482–505. Curtin, Richard, Stanley Presser, and Eleanor Singer. 2005. “Changes in Telephone Survey Nonresponse over the Past Quarter Century.” Public Opinion Quarterly 69 (1): 87–98. [1897] 1951. Suicide. Glencoe, IL: Free Press. Eastman, Crystal. 1910. “Work-Accidents and Employers’ Liability.” The Survey, September 3, pp. 788–94. Ellison, Christopher G., and Darren E. Sherkat. 1990. “Patterns of Religious Mobility among Black Americans.” Sociological Quarterly 31 (4): 551–68. Danieli, Ardha, and Carol Woodhams. 2005. “Eman- Emerson, Robert M., Kerry O. Ferris, and Carol Brooks cipatory Research Methodology and Disability: A Gardner. 1998. 
“On Being Stalked.” Social Problems Critique.” International Journal of Social Research 8 (4): 281–96. Davern, Michael, Todd H. Rockwood, Randy Sherrod, and Stephen Campbell. 2003. “Prepaid Monetary Incentives and Data Quality in Face-to-Face Interviews: 45 (3): 289–314. Farquharson, Karen. 2005. “A Different Kind of Snowball: Identifying Key Policymakers.” International Journal of Social Research 8 (4): 345–53. Fausto-Sterling, Anne. 1992. “Why Do We Know So Little Data from the 1996 Survey of Income and Program about Human Sex?” Discover Archives, June. http:// Participation Incentive Experiment.” Public Opinion cas.bellarmine.edu/tietjen/Human%20Nature%20S Quarterly 67:139–47. Davis, Fred. 1973. “The Martian and the Convert: Ontological Polarities in Social Research.” Urban Life 2 (3): 333–43. Davis, James. 1992. “Changeable Weather in a Cooling Climate atop the Liberal Plateau: Conversion and %201999/why_do_we_know_so_little_about_h.htm. Festinger, L., H. W. Reicker, and S. Schachter. 1956. When Prophecy Fails. Minneapolis: University of Minnesota Press. Fielding, Nigel. 2004. “Getting the Most from Archived Qualitative Data: Epistemological, Practical and 528 REFERENCES Professional Obstacles.” International Journal of Social ing the Monopoly with Participatory Action-Research, Research Methodology 7 (1): 97–104. edited by O. Fals-Borda and M. A. Rahman. New Ford, David A. 1989. “Preventing and Provoking Wife Battery through Criminal Sanctioning: A Look at the Risks.” September, unpublished manuscript. Ford, David A., and Mary Jean Regoli. 1992. “The Preventive Impacts of Policies for Prosecuting Wife Batterers.” Pp. 181–208 in Domestic Violence: The Changing Criminal Justice Response, edited by E. S. Buzawa and C. G. Buzawa. New York: Auburn. Foschi, Martha, G. Keith Warriner, and Stephen D. Hart. 1985. “Standards, Expectations, and Interpersonal Influence.” Social Psychology Quarterly 48 (2): 108–17. Fox, Katherine J. 1991. 
“The Politics of Prevention: Ethnographers Combat AIDS among Drug Users.” Pp. 227–49 in Ethnography Unbound: Power and Resistance in the Modern Metropolis, edited by M. Burawoy, A. Burton, A. A. Ferguson, K. J. Fox, J. Gamson, N. Gartrell, L. Hurst, C. Kurzman, L. Salzinger, J. Schiffman, and S. Ui. Berkeley: University of California Press. Frankel, Mark S., and Sanyin Siang. 1999. “Ethical and York: Apex Press. Geertz, Clifford. 1973. The Interpretation of Cultures. New York: Basic Books. Glaser, Barney, and Anselm Strauss. 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine. Glock, Charles Y., Benjamin B. Ringer, and Earl R. Babbie. 1967. To Comfort and to Challenge. Berkeley: University of California Press. Goffman, Erving. 1961. Asylums: Essays on the Social Situation of Mental Patients and Other Inmates. Chicago: Aldine. 1963. Stigma: Notes on the Management of a Spoiled Identity. Englewood Cliffs, NJ: Prentice-Hall. 1974. Frame Analysis. Cambridge, MA: Harvard University Press. 1979. Gender Advertisements. New York: Harper & Row. Gottlieb, Bruce. 1999. “Cooking the School Books: How U.S. News Cheats in Picking Its ‘Best American Legal Aspects of Human Subjects Research on the Colleges.’” Slate, August 31. http://www.slate.com/ Internet: A Report of a Workshop.” Washington, DC: crapshoot/99-08-31/crapshoot.asp. American Association for the Advancement of Sci- Graham, Laurie, and Richard Hogan. 1990. “Social Class ence, November. http://www.aaas.org/spp/dspp/ and Tactics: Neighborhood Opposition to Group sfrl/projects/intres/main.htm. Gall, John. 1975. Systemantics: How Systems Work and Especially How They Fail. New York: Quadrangle. Gamson, William A. 1992. Talking Politics. New York: Cambridge University Press. Gans, Herbert. 1971. The Uses of Poverty: The Poor Pay Homes.” Sociological Quarterly 31 (4): 513–29. Greenwood, Peter W., et al. 1994. 
Three Strikes and You’re Out: Estimated Benefits and Costs of California’s New Mandatory-Sentencing Law. Santa Monica, CA: Rand Corporation. Greenwood, Peter W., C. Peter Rydell, and Karyn Model. All. Social Policy, July–August, pp. 20–24. 1996. Diverting Children from a Life of Crime: Mea- 2002. “More of Us Should Become Public Sociolo- suring Costs and Benefits. Santa Monica, CA: Rand gists.” Footnotes, July–August (Washington, DC: American Sociological Association). http://www .asanet.org/footnotes/julyaugust02/fn10.html. Garant, Carol. 1980. “Stalls in the Therapeutic Process.” American Journal of Nursing, December, pp. 2166–67. Gard, Greta, ed. 1993. Ecofeminism: Women, Animals, Nature. Philadelphia: Temple University Press. Quoted in Linda J. Rynbrandt and Mary Jo Deegan. 2002. “The Ecofeminist Pragmatism of Caroline Bartlett Crane, 1896–1935.” American Sociologist 33 (3): 60. Garfinkel, H. 1967. Studies in Ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall. Corporation. Griffith, Alison I. 1995. “Mothering, Schooling, and Children’s Development.” Pp. 108–21 in Knowledge, Experience, and Ruling Relations: Studies in the Social Organization of Knowledge, edited by M. Campbell and A. Manicom. Toronto, Canada: University of Toronto Press. Gubrium, Jaber F., and James A. Holstein. 1997. The New Language of Qualitative Method. New York: Oxford University Press. Hawking, Stephen. 2001. The Universe in a Nutshell. New York: Bantam Books. Gaventa, J. 1991. “Towards a Knowledge Democracy: Hedrick, Terry E., Leonard Bickman, and Debra J. Rog. Viewpoints on Participatory Research in North 1993. Applied Research Design: A Practical Guide. America.” Pp. 121–31 in Action and Knowledge: Break- Thousand Oaks, CA: Sage. REFERENCES Hempel, Carl G. 1952. “Fundamentals of Concept Formation in Empirical Science.” International Encyclopedia of United Science II, no. 7. Heritage, J. 1984. Garfinkel and Ethnomethodology. Cambridge: Polity Press. 
Heritage, John, and David Greatbatch. 1992. “On the Institutional Character of Institutional Talk.” In Talk at Work, edited by P. Drew and J. Heritage. Cambridge, England: Cambridge University Press. Higginbotham, A. Leon, Jr. 1978. In the Matter of Color: Race and the American Legal Process. New York: Oxford University Press. Hill, Lewis. 2000. Yankee Summer: The Way We Were: Growing up in Rural Vermont in the 1930s. Bloomington, IN: First Books Library. Hilts, Philip J. 1981. “Values of Driving Classes Disputed.” San Francisco Chronicle, June 25, p. 4. Hogan, Richard, and Carolyn C. Perrucci. 1998. “Producing and Reproducing Class and Status Differences: Racial and Gender Gaps in U.S. Employment and Retirement Income.” Social Problems 45 (4): 528–49. 529 Irwin, John, and James Austin. 1997. It’s About Time: America’s Imprisonment Binge. Belmont, CA: Wadsworth. Isaac, Larry W., and Larry J. Griffin. 1989. “A Historicism in Time-Series Analyses of Historical Process: Critique, Redirection, and Illustrations from U.S. Labor History.” American Sociological Association 54:873–90. Jackman, Mary R., and Mary Scheuer Senter. 1980. “Images of Social Groups: Categorical or Qualified?” Public Opinion Quarterly 44:340–61. Jackson, Jonathan. 2005. “Validating New Measures of the Fear of Crime.” International Journal of Social Research 8 (4): 297–315. Jacobs, Bruce A., and Jody Miller. 1998. “Crack Dealing, Gender, and Arrest Avoidance.” Social Problems 45 (4): 550–69. Jasso, Guillermina. 1988. “Principles of Theoretical Analysis.” Sociological Theory 6:1–20. Jensen, Arthur. 1969. “How Much Can We Boost IQ and Scholastic Achievement?” Harvard Educational Review 39:273–74. Howard, Edward N., and Darlene M. Norman. 1981. Jobes, Patrick C., Andra Aldea, Constantin Cernat, Ioana- “Measuring Public Library Performance.” Library Minerva Icolisan, Gabriel Iordache, Sabastian Lazeru, Journal, February, pp. 305–8. Catalin Stoica, Gheorghe Tibil, and Eugenia Udangiu. Howell, Joseph T. 
1973. Hard Living on Clay Street. Garden City, NY: Doubleday Anchor. Huberman, A. Michael, and Matthew B. Miles. 1994. “Data Management and Analysis Methods.” Pp. 428– 44 in Handbook of Qualitative Research, edited by Norman K. Denzin and Yvonna S. Lincoln. Thousand Oaks, CA: Sage. Hughes, Michael. 1980. “The Fruits of Cultivation Analysis: A Reexamination of Some Effects of Television Watching.” Public Opinion Quarterly 44:287–302. Humphreys, Laud. 1970. Tearoom Trade: Impersonal Sex in Public Places. Chicago: Aldine. Hurst, Leslie. 1991. “Mr. Henry Makes a Deal.” Pp. 183– 202 in Ethnography Unbound: Power and Resistance in the Modern Metropolis, edited by M. Burawoy, A. Burton, A. A. Ferguson, K. J. Fox, J. Gamson, N. Gartrell, L. Hurst, C. Kurzman, L. Salzinger, J. Schiffman, and S. Ui. Berkeley: University of California Press. Iannacchione, Vincent G., Jennifer M. Staab, and David T. Redden. 2003. “Evaluating the Use of Residential 1997. “Shopping as a Social Problem: A Grounded Theoretical Analysis of Experiences among Romanian Shoppers.” Journal of Applied Sociology 14 (1): 124–46. Johnson, Jeffrey C. 1990. Selecting Ethnographic Informants. Thousand Oaks, CA: Sage. Johnston, Hank. 1980. “The Marketed Social Movement: A Case Study of the Rapid Growth of TM.” Pacific Sociological Review, July, pp. 333–54. Johnston, Hank, and David A. Snow. 1998. “Subcultures and the Emergence of the Estonian Nationalist Opposition 1945–1990.” Sociological Perspectives 41 (3): 473–97. Jones, James H. 1981. Bad Blood: The Tuskegee Syphilis Experiments. New York: Free Press. Kaplan, Abraham. 1964. The Conduct of Inquiry. San Francisco: Chandler. Kasl, Stanislav V., Rupert F. Chisolm, and Brenda Eskenazi. 1981. “The lmpact of the Accident at Three Mile Island on the Behavior and Well-Being of Nuclear Mailing Addresses in a Metropolitan Household Sur- Workers.” American Journal of Public Health, May, vey.” Public Opinion Quarterly 67:202–10. pp. 472–95. Ibrahim, Saad Eddin. 2003. 
“Letter from Cairo.” Contexts 2 (2): 68–72. Kasof, Joseph. 1993. “Sex Bias in the Naming of Stimulus Persons.” Psychological Bulletin 113 (1): 140–63. 530 REFERENCES Kebede, Alemseghed, and J. David Knottnerus. 1998. “Beyond the Pales of Babylon: The Ideational Components and Social Psychological Foundations of Rastafari.” Sociological Perspectives 42 (3): 499–517. Keeter, Scott. 2006. “The Impact of Cell Phone Noncoverage Bias on Polling in the 2004 Presidential Election.” Public Opinion Quarterly 70 (1): 88–98. Lee, Motoko Y., Stephen G. Sapp, and Melvin C. Ray. 1996. “The Reverse Social Distance Scale.” Journal of Social Psychology 136 (1): 17–24. Lengermann, Patricia Madoo, and Jill NiebruggeBrantley. 2002. “Back to the Future: Settlement Sociology, 1885–1930.” American Sociologist 33 (3): 5–20. Kentor, Jeffrey. 2001. “The Long Term Effects of Global- Lever, Janet. 1986. “Sex Differences in the Complexity of ization on Income Inequality, Population Growth, Children’s Play and Games.” Pp. 74–89 in Structure and Economic Development.” Social Problems 48 (4): and Process, edited by Richard J. Peterson and Char- 435–55. Khayatt, Didi. 1995. “Compulsory Heterosexuality: Schools and Lesbian Students.” Pp. 149–63 in Knowledge, Experience, and Ruling Relations: Studies in the Social Organization of Knowledge, edited by M. Camp- lotte A. Vaughan. Belmont, CA: Wadsworth. Lewins, Ann, and Christina Silver. 2006. “Choosing a CAQDAS Package.” July. http://caqdas.soc.surrey .ac.uk/. Libin, A., and J. Cohen-Mansfield. 2000. “Individual bell and A. Manicom. Toronto, Canada: University of versus Group Approach for Studying and Interven- Toronto Press. ing with Demented Elderly Persons: Methodological Kilburn, John C., Jr. 1998. “It’s a Matter of Definition: Dilemmas of Service Delivery and Organizational Structure in a Growing Voluntary Organization.” Journal of Applied Sociology 15 (1): 89–103. Kinnell, Ann Marie, and Douglas W. Maynard. 1996. 
“The Delivery and Receipt of Safer Sex Advice in Pretest Counseling Sessions for HIV and AIDS.” Journal of Contemporary Ethnography 24:405–37. Kinsey, Alfred C., et al. 1948. Sexual Behavior in the Human Male. Philadelphia: W. B. Saunders. 1953. Sexual Behavior in the Human Female. Philadelphia: W. B. Saunders. Kish, Leslie. 1965. Survey Sampling. New York: Wiley. Krueger, Richard A. 1988. Focus Groups. Thousand Oaks, CA: Sage. Kubrin, Charis E. 2005. “I See Death around the Corner: Nihilism in Rap Music.” Sociological Perspectives 48 (4): 433–59. Kubrin, Charis E., and Ronald Weitzer. 2003. “Retaliatory Homicide: Concentrated Disadvantage and Neighborhood Culture.” Social Problems 50 (2): 157–80. Kuhn, Thomas. 1970. The Structure of Scientific Revolutions. Chicago: University of Chicago Press. Kvale, Steinar. 1996. InterViews: An Introduction to Qualitative Research Interviewing. Thousand Oaks, CA: Sage. Lakoff, George. 2002. Moral Politics: How Liberals and Conservatives Think. Chicago: University of Chicago Press. Laumann, Edward O., John H. Gagnon, Robert T. Michael, Perspectives.” Gerontologist, October 15, p. 105. Linton, Ralph. 1937. The Study of Man. New York: D. Appleton-Century. Literary Digest. 1936a. “Landon, 1,293,669: Roosevelt, 972,897.” October 31, pp. 5–6. 1936b. “What Went Wrong with the Polls?” November 14, pp. 7–8. Lofland, John. 2003. Demolishing a Historic Hotel. Davis, CA: Davis Research. Lofland, John, and Lyn H. Lofland. 1995. Analyzing Social Settings: A Guide to Qualitative Observation and Analysis. 3rd. ed. Belmont, CA: Wadsworth. Lofland, John, David Snow, Leon Anderson, and Lyn H. Lofland. 2006. Analyzing Social Settings: A Guide to Qualitative Observation and Analysis. 4th ed. Belmont, CA: Wadsworth. Lynd, Robert S., and Helen M. Lynd. 1929. Middletown. New York: Harcourt, Brace. 1937. Middletown in Transition. New York: Harcourt, Brace. Madison, Anna-Marie. 1992. 
“Primary Inclusion of Culturally Diverse Minority Program Participants in the Evaluation Process.” New Directions for Program Evaluation, no. 53, pp. 35–43. Mahoney, James, and Dietrich Rueschemeyer, eds. 2003. Comparative Historical Analysis in the Social Sciences. New York: Cambridge University Press. Manning, Peter K., and Betsy Cullum-Swan. 1994. “Narrative, Content, and Semiotic Analysis.” Pp. 463–77 in Handbook of Qualitative Research, edited by Norman and Stuart Michaels. 1994. The Social Organization of K. Denzin and Yvonna S. Lincoln. Thousand Oaks, Sexuality. Chicago: University of Chicago Press. CA: Sage. REFERENCES Marshall, Catherine, and Gretchen B. Rossman. 1995. Designing Qualitative Research. Thousand Oaks, CA: Sage. Marx, Karl. [1867] 1967. Capital. New York: International Publishers. [1880] 1956. Revue Socialist, July 5. Reprinted in Karl Marx: Selected Writings in Sociology and Social Philosophy, edited by T. B. Bottomore and Maximilien Rubel. New York: McGraw-Hill. McAlister, Alfred, Cheryl Perry, Joel Killen, Lee Ann Slinkard, and Nathan Maccoby. 1980. “Pilot Study of Smoking, Alcohol, and Drug Abuse Prevention.” American Journal of Public Health, July, pp. 719–21. McGrane, Bernard. 1994. The Un-TV and the 10 mph Car: Experiments in Personal Freedom and Everyday Life. Fort Bragg, CA: The Small Press. Meadows, Donella H., Dennis L. Meadows, and Jørgen 531 Morgan, David L., ed. 1993. Successful Focus Groups: Advancing the State of the Art. Thousand Oaks, CA: Sage. Morgan, Lewis H. 1870. Systems of Consanguinity and Affinity. Washington, DC: Smithsonian Institution. Moskowitz, Milt. 1981. “The Drugs That Doctors Order.” San Francisco Chronicle, May 23, p. 33. Moynihan, Daniel. 1965. The Negro Family: The Case for National Action. Washington, DC: U.S. Government Printing Office. Myrdal, Gunnar. 1944. An American Dilemma. New York: Harper & Row. Naisbitt, John, and Patricia Aburdene. 1990. Megatrends 2000: Ten New Directions for the 1990’s. 
New York: Morrow. Nature Conservancy. 2005. “The Nature of Science on the Sacramento River.” Nature Conservancy [newsletter], Spring–Summer. Randers. 1992. Beyond the Limits: Confronting Global Neuman, W. Lawrence. 1998. “Negotiated Meanings and Collapse, Envisioning a Sustainable Future. Post Mills, State Transformation: The Trust Issue in the Progres- VT: Chelsea Green. Meadows, Donella H., Dennis L. Meadows, Jørgen Rand- sive Era.” Social Problems 45 (3): 315–35. Nicholls, William L., II, Reginald P. Baker, and Jean Mar- ers, and William W. Behrens, III. 1972. The Limits to tin. 1996. “The Effect of New Data Collection Technol- Growth. New York: Signet Books. ogy on Survey Data Quality.” In Survey Measurement Menjívar, Cecilia. 2000. Fragmented Ties: Salvadoran Immigrant Networks in America. Berkeley: University of California Press. Merton, Robert K. 1938. “Social Structure and Anomie.” American Sociological Review 3:672–82. Milgram, Stanley. 1963. “Behavioral Study of Obedience.” Journal of Abnormal Social Psychology 67:371–78. 1965. “Some Conditions of Obedience and and Process Quality, edited by L. Lyberg, P. Biemer, M. Collins, C. Dippo, N. Schwarz, and D. Trewin. New York: Wiley. “1 in 5 in New Survey Express Some Doubt about the Holocaust.” 1993. New York Times, April 2, p. A12. O’Neill, Harry W. 1992. “They Can’t Subpoena What You Ain’t Got.” AAPOR News 19 (2): 4, 7. Onwuegbuzie, Anthony J., and Nancy L. Leech. 2005. “On Disobedience to Authority.” Human Relations Becoming a Pragmatic Researcher: The Importance 18:57–76. of Combining Quantitative and Qualitative Research Miller, Delbert. 1991. Handbook of Research Design and Social Measurement. Thousand Oaks, CA: Sage. Mirola, William A. 2003. “Asking for Bread, Receiving a Methodologies.” International Journal of Research Methodology 8 (5): 375–87. Överlien, Carolina, Karin Aronsson, and Margareta Stone: The Rise and Fall of Religious Ideologies in Hydén. 2005. 
“The Focus Group Interview as an Chicago’s Eight-Hour Movement.” Social Problems 50 In-Depth Method? Young Women Talking about (2): 273–93. Sexuality.” International Journal of Social Research 8 Mitchell, Richard G., Jr. 1991. “Secrecy and Disclosure in Field Work.” Pp. 97–108 in Experiencing Fieldwork: An Inside View of Qualitative Research, edited by William B. Shaffir and Robert A. Stebbins. Thousand Oaks, CA: Sage. 2002. Dancing at Armageddon: Survivalism and Chaos in Modern Times. Chicago: University of Chicago Press. Mitofsky, Warren J. 1999. “Miscalls Likely in 2000.” Public Perspective 10 (5): 42–43. (4): 331–44. Perinelli, Phillip J. 1986. “Nonsuspecting Public in TV Call-in Polls.” New York Times, February 14, letter to the editor. Perrow, Charles. 2002. Organizing America: Wealth, Power, and the Origins of Corporate Capitalism. Princeton, NJ: Princeton University Press. Petersen, Larry R., and Judy L. Maynard. 1981. “Income, Equity, and Wives’ Housekeeping Role Expectations.” Pacific Sociological Review, January, pp. 87–105. 532 REFERENCES Picou, J. Steven. 1996a. “Compelled Disclosure of Rogers, Everett M., Peter W. Vaughan, Ramadhan M. Scholarly Research: Some Comments on High Stakes A. Swalehe, Nagesh Rao, and Suruchi Sood. 1996. Litigation.” Law and Contemporary Problems 59 (3): “Effects of an Entertainment-Education Radio Soap 149–57. Opera on Family Planning and HIV/AIDS Prevention 1996b. “Sociology and Compelled Disclosure: Pro- Behavior in Tanzania.” Report presented at a techni- tecting Respondent Confidentiality.” Sociological cal briefing on the Tanzania Entertainment-Education Spectrum 16 (3): 207–38. Project, Rockefeller Foundation, New York, March 27. Plutzer, Eric, and Michael Berkman. 2005. “The Graying of America and Support for Funding the Nation’s Support.” Public Opinion Quarterly 69:66–86. Polivka, Anne E., and Jennifer M. Rothgeb. 1993. “Redesigning the CPS Questionnaire.” Monthly Labor Review 116 (9): 10–28. 
“Poll on Doubt of Holocaust Is Corrected.” 1993. New York Times, July 8, p. A7. Population Communications International. 1996. International Dateline [February]. New York: Author. Powell, Elwin H. 1958. “Occupation, Status, and Suicide: Roper, Burns. 1992. “. . . But Will They Give the Poll Its Due?” AAPOR News 19 (2): 5–6. Rosenberg, Morris. 1968. The Logic of Survey Analysis. New York: Basic Books. Rosenthal, Robert, and Lenore Jacobson. 1968. Pygmalion in the Classroom. New York: Holt, Rinehart & Winston. Ross, Jeffrey Ian. 2004. “Taking Stock of Research Methods and Analysis on Oppositional Political Terrorism.” American Sociologist 35 (2): 26–37. Rossman, Gabriel. 2002. “The Qualitative Influence of Toward a Redefinition of Anomie.” American Sociolog- Ownership on Media Content: The Case of Movie ical Review 23 (4): 131–39. Reviews.” Paper presented to the American Socio- Presser, Stanley, and Johnny Blair. 1994. “Survey Pretesting: Do Different Methods Produce Different Results?” Pp. 73–104 in Sociological Methodology 1994, edited by Peter Marsden. San Francisco: Jossey-Bass. Prewitt, Kenneth. 2003. “Partisan Politics in the 2000 U.S. Census.” Population Reference Bureau, November. http://www.prb.org/Template.cfm?Section0001PRB&te mplate0001/Content/ContentGroups/Articles/03/ Partisan_Politics_in_the_2000_U_S__Census.htm. Quoss, Bernita, Margaret Cooney, and Terri Longhurst. 2000. “Academics and Advocates: Using Participatory Action Research to Influence Welfare Policy.” Journal of Consumer Affairs 34 (1): 47–61. Ragin, Charles C., and Howard S. Becker. 1992. What Is logical Association, Chicago. Reported in Contexts 2 (2, Spring 2003): 7. Rothman, Ellen K. 1981. “The Written Record.” Journal of Family History, Spring, pp. 47–56. Rubin, Herbert J., and Riene S. Rubin. 1995. Qualitative Interviewing: The Art of Hearing Data. Thousand Oaks, CA: Sage. Sacks, Jeffrey J., W. Mark Krushat, and Jeffrey Newman. 1980. 
“Reliability of the Health Hazard Appraisal.” American Journal of Public Health, July, pp. 730–32. Sanders, William B. 1994. Gangbangs and Drive-bys: Grounded Culture and Juvenile Gang Violence. New York: Aldine De Gruyter. Scarce, Rik. 1990. Ecowarriors: Understanding the Radical a Case? Exploring the Foundations of Social Inquiry. Environmental Movement. Chicago: Noble Press, 1990. Cambridge, England: Cambridge University Press. 1999. “Good Faith, Bad Ethics: When Scholars Go the Rasinski, Kenneth A. 1989. “The Effect of Question Word- Distance and Scholarly Associations Do Not.” ing on Public Support for Government Spending.” Law and Social Inquiry: Journal of the American Public Opinion Quarterly 53:388–94. Redfield, Robert. 1941. The Folk Culture of Yucatan. Chicago: University of Chicago Press. Reinharz, Shulamit. 1992. Feminist Methods in Social Research. New York: Oxford University Press. Riecken, Henry W., and Robert F. Boruch. 1974. Social Bar Foundation 24 (4): 977–86. Schiflett, Kathy L., and Mary Zey. 1990. “Comparison of Characteristics of Private Product Producing Organizations and Public Service Organizations.” Sociological Quarterly 31 (4): 569–83. Schmitt, Frederika E., and Patricia Yancey Martin. 1999. Experimentation: A Method for Planning and Evaluating “Unobtrusive Mobilization by an Institutionalized Social Intervention. New York: Academic Press. Rape Crisis Center: ‘All We Do Comes from Victims.’” Roethlisberger, F. J., and W. J. Dickson. 1939. Management and the Worker. Cambridge, MA: Harvard University Press. Gender and Society 13 (3): 364–84. Schutz, Alfred. 1967. The Phenomenology of the Social World. Evanston, IL: Northwestern University Press. REFERENCES 1970. On Phenomenology and Social Relations. Chicago: University of Chicago Press. 533 Legislation in Illinois.” Criminal Justice Policy Review 4 (1): 1–18. Sense, Andrew J. 2006. “Driving the Bus from the Rear Srole, Leo. 1956. 
INDEX

A
Abstracts, 472
Accuracy in measurement, 156–157
Ackroyd, Dan, 260
Aggregates, 15–16, 104
American Almanac, The, 367
American Democracy (Skocpol), 374
Aminzade, Ron, 375–376
Analysis of data, 122, 124–125
  ethics, 73–74
  qualitative data analysis. See Qualitative data analysis.
  quantitative data analysis. See Quantitative data analysis.
  reading social research, 478
Analysis of existing statistics, 362–369
  Durkheim's study of suicide, 362–364
  globalization, consequences of, 364–365
  reading social research, 477
  reliability, 366
  sources of existing statistics, 366–369
  units of analysis, 365
  validity in, 365
Analytic induction, 359
Analyzing Social Settings (Lofland), 315
Anderson, Eric, 322
Anderson, Leon, 321, 322
Anderson, W., 10
Andorka, Rudolf, 386
Anomic suicide, 364
Anomie, 142–144, 364
Anonymity, 69–70
Applied research, 27–28
Aronsson, Karin, 339
Articles, 487
Asch Experiment, 42–43
Asher, Ramona, 112
Attributes, 17, 149
Attribution process, 260
Auster, Carol, 359
Authority, 8
Autoethnography, 322
Auto-kinetic effects, 44
Average, 450
Axial coding, 423

B
Babbie, Earl R., 50, 69, 84, 86
Bailey, William, 406–407
Ball-Rokeach, Sandra, 264
Baron, James, 180
Bart, Pauline, 395
Basics of Qualitative Research (Strauss/Corbin), 418
Becker, Howard, 326
Belenky, Mary Field, 41
Bell, Derrick, 42
Bellah, Robert, 319, 372–373, 375, 378
Benton, J. Edwin, 281
Berg, Bruce, 169, 353, 357, 359
Berkman, Michael, 114
Best American Colleges, 181
Beveridge, W. I. B., 48
Bian, Yanjie, 111, 274
Bias
  in sampling, 208–209
  survey research, 277–278
Biddle, Stuart, 26
Bielby, Denise, 153
Bielby, William, 153
Bin Laden, Osama, 39
Birthrates, 15–16
Bishop, G. F., 274
Bivariate analysis, 459–463
Bivariate relationships, 174–177
Black, Donald, 366
Blair, Johnny, 283, 302
Blodgett, Timothy, 237
Blumenthal, Mark, 301
Bogardus, Emory, 187
Bogardus social distance scale, 186–187
Bollen, Kenneth, 175
Bolstein, Richard, 289
Book of Leviticus, 424–426, 429–433
Books in Print, 498
Boruch, Robert, 388
Branch Davidians, 322
Brown, L. Dave, 395
Brown vs. Board of Education of Topeka, 42, 83
Budget in research proposals, 125
Burawoy, Michael, 326
Burneson, Ray, 402
Bush, George W., 200, 201

C
Calvin, John, 372
Campbell, Donald, 253, 254, 256, 257, 258, 259
Campbell, M. L., 328
CAPI, 299
Carpini, Michael, 112
Carr, C. Lynn, 116
Carroll, Lewis, 46
Case-oriented analysis, 417
Case studies, 326–328
CASI, 299
CATI, 297, 298
Causal reasoning, 6
Causation, 19
Cause and effect indicators, 175
Census, U.S., 85–86
Childers, Terry, 289
Chirot, Daniel, 138
Chisolm, Rupert, 263
Chi square distribution, 508–509
Chossudovsky, Michel, 37
Church involvement study, 50–51, 57, 69, 459, 463
Clark, Roger, 108
Closed-ended questions, 272–273
Cluster sampling, 231–238, 355
Codebooks, 447–448
Code notes, 427
Codes of ethics, 77–79
Coding
  axial, 423
  in content analysis, 355–359
  open, 423
  in qualitative data analysis, 422–426
  selective, 424
Coefficient of reproducibility, 192
Cohen-Mansfield, J., 23
Cohort studies, 113–114
Coleman, James, 83–84
Collins, G. C., 237
Comfort Hypothesis, 50–51, 57, 69, 459, 463
Communication. See Reading social research.
Comparative and historical research, 369–378
  analytical techniques, 376–378
  defined, 369
  examples of, 369–374
  reading social research, 477
  sources of, 374–376
Computer-assisted personal interviewing (CAPI), 299
Computer-assisted self-interviewing (CASI), 299
Computer-assisted telephone interviewing (CATI), 297, 298
Computerized library files, 501–503
Computerized Physician Order Entry (CPOE) systems, 7
Computerized self-administered questionnaire (CSAQ), 299
Computer programs for qualitative data analysis, 428–438
Computer self-administered questionnaires, 299
Computer simulations, 407–408
Comte, Auguste, 36, 39, 42, 369, 370
Conception measurements, 133–134
Concept mapping, 427–428
Concept measurements, 133–136
Conceptualization, 131–132, 136
  anomie, 142–144
  in content analysis, 357
  creating conceptual order, 140–142
  definitions in descriptive and explanatory studies, 145–147
  dimensions, 137–139
  example of, 142–144
  indicators, 136–137, 139
  nominal definitions, 140
  operational definitions, 140
  in project design, 120
  real definitions, 140
Confidence intervals, 219–221
Confidence levels, 219–221
Confidentiality, 70–72
Conflict paradigm, 37
Conrad, Clifton F., 325
Constant comparative method, 418
Constructed knowledge, 41
Constructs, 135
Construct validity, 161
Content analysis, 350–362
  coding, 355–359
  conceptualization, 357
  counting and record keeping, 357–359
  defined, 350
  examples of, 359–361
  latent content, 356–357
  manifest content, 356
  qualitative data analysis, 359
  reading social research, 477
  sampling, 352–355
  strengths and weaknesses of, 361–362
  topics in, 350–351
  units of analysis, 352–355
Content validity, 161
Contingency questions, 279–280
Contingency tables, 462
Continuous variables, 454
Control groups, 115, 248–249
Controlled experiments. See Experiments.
Control variable. See Variables.
Conversation analysis (CA), 421
Cook, Thomas, 254, 256
Cooley, Charles Horton, 38, 260
Cooney, Margaret, 330
Copernicus, 35, 401
Corbin, Juliet, 324, 416, 418, 423, 426
Correlation, 100
Cost-benefit studies, 385
Couper, Mick, 299, 300
Cox, James, 201
CPOE systems, 7
Craig, R. Stephen, 359–360
Crawford, Kent, 397
Creation myth, 44
Criterion-related validity, 161
Critical race theory, 42
Cross-case analysis, 416
Cross-sectional studies, 111–112
CSAQ, 299
Cullum-Swan, Betsy, 419
"Cult of the individual," 35
Curtin, Richard, 297

D
Daly, John, 281
Danieli, Ardha, 331
Dannemeyer, William, 85
Darwin, 35
Data analysis, 122, 124–125
  ethics, 73–74
  qualitative data analysis. See Qualitative data analysis.
  quantitative data analysis. See Quantitative data analysis.
  reading social research, 478
Data processing, 121–122, 124
Davern, Michael, 289
Davis, Fred, 318
Davis, James, 113–114, 305
Death penalty as deterrence, 406–407
Debriefing, 73
Deception, 72–73
De Coster, Stacy, 180
Deduction, 23–25
  induction compared, 48–54
  theory construction, 54–56
Deflem, Mathieu, 373
DeFleur, Lois, 366
Delgado, Richard, 42
Democracy in America (de Tocqueville), 374
Demographic Yearbook, 368
DeNuzzo, Rinaldo V., 221
Dependent variables, 19, 247
Description research, 99, 145–147
Design of research. See Research design.
De Tocqueville, Alexis, 374
Dewey, Thomas E., 202, 203
Diagnostics, sociological, 464–466
Dialectics, 22–28
  applied research, 27–28
  deduction. See Deduction.
  idiographic explanations, 22–23, 24
  induction. See Induction.
  nomothetic explanations, 22–23, 24
  pure research, 27–28
  qualitative data, 25–27
  quantitative data, 25–27
Dickson, W. J., 249
Dillman, Don, 289
Dimensions, 137–139, 148–149
Direct observables, 134–135
Disconfirmability, 47
Discovery of Grounded Theory, The (Glaser/Strauss), 324
Discrete variables, 454
Dispersion, 453–454
Distributive justice, 54–56
Double-barreled questions, 273–274, 275
Double-blind experiments, 249–250
Douvan, Elizabeth, 115
Doyle, Sir Arthur Conan, 53
Dual consciousness, 42
DuBois, W. E. B., 42
Dunlap, Riley E., 48
Durkheim, Emile, 40, 53, 142, 144, 349, 362–364, 365, 369

E
Eastman, Crystal, 28
Ecological fallacy, 109–110
Economic Ethic of the World Religions, The (Bellah), 372
Ecowarriors: Understanding the Radical Environmental Movement (Scarce), 71
Edwards, Jennifer, 138
Einstein, 35
Elements in sampling, 211
Elements of Style, The (Strunk/White), 486
Emancipatory research, 331
Emerson, Robert, 38
Emic perspective, 319
Empirical support for reality, 6
Epistemology, 6
Equal probability of selection method (EPSEM), 210–211
Eskenazi, Brenda, 263
Estimated sampling error, 511
Ethics, 22, 65–81
  analysis, 73–74
  anonymity, 69–70
  confidentiality, 70–72
  deception, 72–73
  defined, 67
  in evaluation research, 408–409
  in experiments, 265
  harm to participants, 68–69, 75
  homosexual behavior studies, 79–80
  human obedience observations, 80–81
  human sexuality research, 74–75
  institutional review boards, 74–77
  of measurement, 164
  professional codes, 77–79
  in qualitative data analysis, 438–439
  in qualitative field research, 345
  in quantitative data analysis, 466–467
  in reading social research, 493
  reporting, 73–74
  in research design, 125
  sampling, 238–239
  survey research and, 307
  Tearoom Trade (Humphreys), 79–80
  theory and, 60
  in unobtrusive research, 378–379
  voluntary participation, 67–68
  in writing social research, 493
Ethnography, 321
  autoethnography, 322
  institutional, 322, 328–329
  virtual ethnography, 322
Ethnomethodology, 38–39, 322–324
Etic perspective, 319
Evaluation research, 383–409
  computer simulations, 407–408
  defined, 384
  ethics in, 408–409
  experimental designs, 390–391
  interventions, 388–389
  logistics, 397–399
  measurements, 386–390
  multiple time-series designs, 394–395
  nonequivalent control groups, 394
  operationalizing success and failure, 389–390
  outcomes, specifying, 387–388
  populations, 389
  qualitative evaluations, 395–397
  quasi experiments, 391–395
  reading social research, 477–478
  results, use of, 400–405
  Sabido methodology, 405
  social indicators research, 406–408
  time-series design, 392–393
  topics in, 385–386
  Tuskegee syphilis program, 409
Expectations communication model, 260
Experiments, 245–265
  classical experiments, 246–250
  control groups, 248–249
  dependent variables, 247
  double-blind experiments, 249–250
  ethics in, 265
  in evaluation research, 390–391
  example of, 259–262
  experimental groups, 115, 248–249
  external invalidity, 257–259
  field experiments, 259
  independent variables, 247
  internal invalidity, 254–257
  matching, 251–253
  "natural" experiments, 246, 263–264
  posttesting, 247–248
  preexperimental research designs, 253–254
  pretesting, 247–248
  probability sampling, 250–251
  randomization, 251, 252–253
  reading social research, 475–476
  strengths and weaknesses of method, 264–265
  subject selection, 250–253
  topics appropriate to, 246
  validity, 254–259
  web-based, 262
Explanation research, 99, 145–147
Explanatory social research, 22
Exploration research, 97–99
Exploratory social research, 21–22
Extended case method, 326–328
External invalidity, 257–259
External validation, 184
Exxon Valdez oil spill, 70–71

F
Face validity, 160, 173
Fair Lady, My, 259
Farquharson, Karen, 205
Fausto-Sterling, Anne, 85
Feinberg, Barry, 297
Feminist paradigms, 40–42
Ferris, Kerry, 38
Festinger, Leon, 112
Field experiments, 259
Fielding, Nick, 306–307
Field research, 56. See also Qualitative field research.
Fine, Gary, 112
Fink, Jeffrey, 397
Fisher, Patricia, 146
Focus groups, 97, 338–339
Ford, David, 403–404
Foschi, Martha, 260
Foundations of social science. See Social science.
Fox, Katherine, 327–328
Freire, Paulo, 332
Frequency distributions, 450

G
Gall, John, 489
Gallup, George, 202, 203, 205, 270
Gambler's fallacy, 9
Gamson, William, 339
Gans, Herbert, 59, 60, 88
GapMinder software, 463
Gard, Greta, 40
Gardner, Carol Brooks, 38
Garfinkel, Harold, 38, 323
GDI, 185
Geertz, Clifford, 162
Gender Advertisements (Goffman), 419
Gender Empowerment Measure (GEM), 185, 186
Gender-related Development Index (GDI), 185
General Social Survey (GSS), 20
Genocide defined, 138–139
Glaser, Barney, 56, 324, 359, 418, 423
Globalization, consequences of, 364–365
Glock, Charles, 50, 459, 463
Goffman, Erving, 56–57, 419–421
Gottlieb, Bruce, 181
Greatbatch, David, 39
Greenwood, Peter, 400
Griffin, Larry, 378
Griffith, Alison, 328–329
Grounded theory, 56, 324–326, 417–418
Groups as units of analysis, 106, 107
Groupthink, 339
Grube, Joel, 264
Guttman, Louis, 190
Guttman scale, 190–193

H
Haley, Alex, 264
Harding, Warren, 201
Harm to participants, 68–69, 75
Hart, Stephen, 260
Hatchett, Shirley, 115
Hawking, Stephen, 45
Hawthorne effect, 249
Helms, Jesse, 85
Hempel, Carl, 140
Heritage, John, 39
Hermeneutic circle, 141
Higginbotham, Leon Jr., 375
Hill, Lewis, 9
Hilts, Philip, 402
Historical research. See Comparative and historical research.
"History," 44
Holmes, Sherlock, 53–54
Homosexual behavior studies, 79–80, 84–85
Horney, Julie, 402–403
Howard, Edward, 387
Howell, Joseph, 343
Huberman, A. Michael, 416–417
Human obedience observations, 80–81
Human sexuality research ethics, 74–75
Humphreys, Laud, 79, 84
Hurst, Leslie, 326–327
Hussein, Saddam, 39
Hydén, Margareta, 339
Hypotheses, 45, 48–49, 101–102
Hypothesis testing, 48

I
Iannacchione, Vincent, 231
Ibrahim, Saad Eddin, 27
Idealistic point of view, 370
Ideal types, 377
Idiographic explanations, 22–23, 24
Illogical reasoning, 9–10
Inaccurate observations, 8
Independent variables, 19, 247
Indexes, 169–186
  bad indexes, 184–185
  bivariate relationships, 174–177
  cause and effect indicators, 175
  construction example, 185–186
  defined, 171
  empirical relationships, examination of, 174–179
  external validation, 184
  face validity, 173
  generality, 173–174
  item analysis, 183–184
  item selection, 173–174
  missing data, handling, 180–182
  multivariate relationships, 177–179
  scales compared, 170–173
  scoring, 179–180
  specificity, 173–174
  unidimensionality, 173
  validation, 182–185
  validators, bad, 184–185
  variance, 174
Indicators, 136–137
  interchangeability of, 139
  single or multiple, 154
  social, 406–408
Indirect observables, 135
Individual rights, 35
Individuals as units of analysis, 105–106, 107
Induction, 23–25
  analytic, 359
  deduction compared, 48–54
  theory construction, 56–58
Informants, 206–207
Informed consent, 69, 74
InfoTrac College Edition, 472
Institutional ethnography, 322, 328–329
Institutional review boards (IRB), 74–77, 125
Interest convergence, 42
Internal invalidity, 254–257
International Monetary Fund (IMF), 37
International Policing (Deflem), 373
Internet research, 478–485
Interval measures, 150–151, 152
Interviews, qualitative, 335–338
Interview surveys, 291–295
  appearance and demeanor, 292
  computer-assisted personal interviewing, 299
  computer-assisted self-interviewing, 299
  computer-assisted telephone interviewing, 297, 298
  coordination and control, 294–295
  familiarity with questionnaire, 292–293
  following question wording, 293
  guidelines for, 292–294
  probing for responses, 293–294
  recording responses, 293
  role of interviewer, 291–292
Iran, sampling, 232
IRB, 74–77, 125
Isaac, Larry, 378
Item analysis, 183–184

J
Jackson, Jonathan, 137
Jacobson, Lenore, 260
Jasso, Guillermina, 54–55
Jensen, Arthur, 84
Jobes, Patrick C., 325
Johnson, Jeffrey, 206–207
Jones, Bill, 400
Judgmental sampling, 204–205

K
Kaplan, Abraham, 134, 135
Kasl, Stanislav, 263
Kasof, Joseph, 278
Kebede, Alemseghed, 58
Keeter, Scott, 112, 223
Kentor, Jeffrey, 364
Kerry, John, 223
Khayatt, Didi, 329
Kim IL Sung, 27
Kinnell, Ann Marie, 421
Kinsey, Alfred, 84
Knottnerus, J. David, 58
Koppel, Ross, 7
Koresh, David, 322
Krueger, Richard, 339
Krushat, W. Mark, 158
Kubrin, Charis, 105, 360–361
Kuhn, Thomas, 35
Kvale, Steinar, 336, 338

L
Lakoff, George, 140
Landon, Alf, 199, 201, 271
Language and Social Reality: The Case of Telling the Convict Code (Wieder), 323
Laslett, Barbara, 375–376
Latent content, 356
Laumann, Edward O., 85
Lazes, Peter, 330
Lee, Motoko, 187
Leech, Nancy, 27
Lennon, Rachel, 108
Lever, Janet, 41
Leviticus example, 424–426, 429–433
Libin, A., 23
Library usage, 498
  Books in Print, 498
  card catalogs, 500
  computerized library files, 501–503
  Library of Congress classification system, 500–501
  online full-text resources, 503–505
  Readers' Guide to Periodical Literature, 498–500
  reference librarians, 498
  reference sources, 498–500
  using the "stacks," 500–501
Likert, Rensis, 188
Likert scale, 188–189
Linking theory and qualitative data analysis, 416–421
Linton, Ralph, 40
Literature reviews
  organizing, 471–472
  in research proposals, 124
  writing social research, 488
Lofland, John, 315, 319, 331, 337, 342, 345, 416
Logical reasoning in analysis of existing statistics, 365
Logical support for reality, 6
Logistics in evaluation research, 397–399
Longhurst, Terri, 330
Longitudinal studies, 112–118
  approximating, 115–116
  cohort studies, 113–114
  comparing types of, 115
  panel studies, 114–115
  strategy examples, 117
  trend studies, 112–113
"Looking-glass-self," 38
Lynd, Helen, 326
Lynd, Robert, 326

M
Machine-readable form, 443
Macrotheory, 36
Madison, Anna-Marie, 387
Mahoney, James, 369
Mail distribution and return of questionnaires, 286–287
Manifest content, 356
Manning, Peter, 419
Marijuana smoking, 57–58
Marsden, Peter V., 305
Marshall, Catherine, 140, 317, 318
Martin, Patricia Yancey, 59
Marx, Karl, 33, 37, 38, 270, 369, 371
Matching in experiments, 251–253
Matrix questions in questionnaires, 280–281
Maynard, Douglas, 421
McGrane, Bernard, 39
McVeigh, Timothy, 322
Mead, George Herbert, 33, 38, 41, 260
Meadows, Dennis, 408
Meadows, Donella, 408
Mean, 450
Measurement, 131–133
  accuracy, 156–157
  conceptions, 133–134
  concepts, 133–136
  ethics of, 164
  in evaluation research, 386–390
  interval measures, 150–151, 152
  nominal measures, 149–150, 152
  operationalization, 149–153
  ordinal measures, 150, 152
  precision, 156–157
  quality of criteria, 156–164
  ratio measures, 151, 152
  reading social research, 475
  reality, 133–134
  reliability, 157–159, 163–164
  split-half method, 159
  test-retest method, 158–159
  validity of, 160–164
Median, 451
Megatrends 2000 (Naisbitt/Aburdene), 351
Memoing, 426–427
Menjivar, Cecilia, 335
Merton, Robert, 142
Meta-analysis, 306
Methodology, 6
Microtheory, 36
Miles, Matthew, 416–417
Milgram, Stanley, 73, 80–81
Miller, Delbert, 289
Mitchell, Richard, 274, 322
Mitofsky, Warren, 300
Mode, 451
Modern view of reality, 10–11
Monitoring studies, 385
Morgan, David, 339
Morgan, Lewis, 369
Morris, Leana, 108
Moynihan, Daniel, 84
Multiple time-series designs, 394–395
Multistage cluster sampling, 231–238
Multivariate analysis, 463–464
Multivariate relationships, 177–179
Murphy, Eddie, 260
My Lai massacre, 80
Myrdal, Gunnar, 83

N
Naive realism, 10
"Natural" experiments, 246, 263–264
Naturalism, 321–322
Nazi medical experiments, 65
Necessary cause, 102–103
Needs assessment studies, 385
Negative case testing, 359
Neuman, W. Lawrence, 58
Newman, Jeffrey, 158
Newton, 35
Nihilism, 361
Nominal definitions, 140
Nominal measures, 149–150, 152
Nomothetic explanations, 22–23, 24, 99–100
  causal analysis and hypothesis testing, 101–102
  complete causation, 102
  correlation, 100
  criteria for nomothetic causality, 100–101
  exceptional cases, 102
  false criteria, 102
  spurious relationships, 100–101
  time order, 100
Nonequivalent control groups, 394
Nonprobability sampling, 203–207
  available subjects, reliance on, 203–204
  informants, 206–207
  judgmental sampling, 204–205
  purposive sampling, 204–205
  quota frame, 206
  quota sampling, 202, 205–206
  snowball sampling, 205
Normal curve, 217
Normal curve areas, 510
Norman, Darlene, 387
NUD*IST, 429–433
Null hypothesis, 49
NVivo, 433–438

O
Objectivity, 43–44, 82–86
O'Brien, Patricia, 395
Observations, 46–47, 121
One-group pretest-posttest design, 253, 255
O'Neill, Harry, 72, 289
One-shot case study, 253, 255
Online full-text resources, 503–505
Onwuegbuzie, Anthony, 27
Open coding, 423
Open-ended questions, 272
Operational definitions, 46, 140
Operationalization, 45–46, 131–132, 147
  attributes, defining, 149
  choices of, 154–155
  dimensions, 148–149
  extremes, variations between, 148
  indicators, single or multiple, 154
  interval measures, 150–151, 152
  measurement of, 149–153
  nominal measures, 149–150, 152
  ordinal measures, 150, 152
  in project design, 120–121
  ratio measures, 151, 152
  variables, defining, 149
  variation, range of, 147–148
Operational notes, 427
Ordinal measures, 150, 152
Organizations as units of analysis, 106–107
Outcome assessment, 385
Overgeneralization, 8–9
Överlien, Carolina, 339

P
Panel studies, 114–115
Paradigms, 33–48
  conflict paradigm, 37
  critical race theory, 42
  early positivism, 36–37
  ethnomethodology, 38–39
  feminist paradigms, 40–42
  macrotheory, 36
  microtheory, 36
  rational objectivity, 42–45
  scientific method, 45–48
  structural functionalism, 39–40
  symbolic interactionism, 37–38
Parameters, 212
Participant observation, 317
Participants in studies, harm to, 68–69, 75
Participatory action research (PAR), 82, 329–332
Percentage down, 460
Percentaging a table, 460–462
Perinelli, Phillip, 209
Periodicity in sampling, 225
Perrow, Charles, 373
Perspectives, 11
Peterson, Ruth, 407
Picou, Steven J., 71
Placebo, 249
Plagiarism, 489–490
Plutzer, Eric, 114
Points of view, 11
Politics of social research, 81–88
  little "p" politics, 86–87
  objectivity and ideology, 82–86
  in perspective, 87–88
Population, 121
Populations in sampling, 211, 221–223, 230
Porter, Stephen, 301
Positivism, 36–37, 44
Postmodern view of reality, 11–12
Posttesting, 247–248
Posttest-only control-group design, 259
Powell, Elwin, 142–143
PPS sampling, 235–236
Precision in measurement, 156–157
Predestination, 372
Preexperimental research designs, 253–254
"Pregnant chads," 160
Premodern view of reality, 10
Presser, Stanley, 283, 297
Pretesting, 247–248, 283
Prewitt, Kenneth, 85
Primary group, 38
Probabilistic reasoning, 6
Probability proportionate to size (PPS) sampling, 235–236
Probability sampling, 200, 207–221, 238
  bias, 208–209
  confidence intervals, 219–221
  confidence levels, 219–221
  elements, 211
  equal probability of selection method, 210–211
  experiments, 250–251
  normal curve, 217
  parameters, 212
  population, 211
  probability theory, 212–213
  random-digit dialing, 212
  random selection, 200, 211–212
  representativeness, 210
  sampling distributions, 212–218
  sampling error, 216–218, 220
  sampling unit, 212
  statistics, 216
  study population, 211
Probing for interview responses, 293–294
Procedural knowledge, 41
Professional codes of ethics, 77–79
Professional papers, 487
Program evaluation, 385
Project design, 117–123
  analysis, 122
  conceptualization, 120
  conclusions and reports, 122
  data processing, 121–122
  method choice, 120
  observations, 121
  operationalization, 120–121
  population for study, 121
  sampling, 121
  triangulation, 123
Proposals, research. See Research proposals.
Protestant Ethic and the Spirit of Capitalism, The (Weber), 372
Pure research, 27–28
Purposive sampling, 204–205
Pygmalion (Shaw), 259
Pygmalion effect, 260

Q
QDA programs, 428–429
Qualitative data, 25–27
Qualitative data analysis, 359, 415–439
  case-oriented analysis, 417
  coding, 422–426
  computer programs for, 428–438
  concept mapping, 427–428
  constant comparative method, 418
  conversation analysis, 421
  cross-case analysis, 416
  defined, 415
  ethics in, 438–439
  grounded theory method, 417–418
  Leviticus example, 424–426, 429–433
  linking theory and analysis, 416–421
  memoing, 426–427
  NUD*IST, 429–433
  numerical descriptions in, 458
  NVivo, 433–438
  patterns, discovering, 416–417
  processing, 421–428
  QDA programs, 428–429
  of quantitative data, 438
  semiotics, 419–421
  variable-oriented analysis, 416
Qualitative evaluations, 395–397
Qualitative field research, 313–345
  case studies, 326–328
  defined, 314
  ethics in, 345
  ethnomethodology, 322–324
  extended case method, 326–328
  focus groups, 338–339
  grounded theory, 324–326
  institutional ethnography, 322, 328–329
  interviews, 335–338
  naturalism, 321–322
  observer roles, 317–319
  participatory action research, 329–332
  preparation, 333–335
  reading social research, 476–477
  recording observations, 340–342
  reliability, 344–345
  strengths and weaknesses, 342–345
  subjects, relation to, 319–321
  topics in, 314–316
  validity, 343–344
Qualitative interview, 335–338
Quantification of data, 443–448
Quantitative data, 25–27
Quantitative data analysis, 443–467
  bivariate analysis, 459–463
  codebook construction, 447–448
  code category development, 445–447
  collapsing response categories, 456–457
  data entry, 448
  defined, 443
  don't know/no opinion, 457–458
  ethics, 466–467
  multivariate analysis, 463–464
  numerical descriptions in qualitative research, 458
  qualitative analysis of, 438
  quantification of data, 443–448
  sociological diagnostics, 464–466
  subgroup comparisons, 455–458
  univariate analysis, 448–455. See also Univariate analysis.
Quasi experiments, 391–395
Questionnaires, 272, 278–286
  contingency questions, 279–280
  format, 278–279
  instructions, 282–283
  matrix questions, 280–281
  order of items, 281–282
  pretesting, 283
  sample, 283–286
  self-administered. See Self-administered questionnaires.
Quoss, Bernita, 330
Quota frame, 206
Quota sampling, 202, 205–206

R
Race and social research, 83–84
Radcliffe-Brown, Alfred, 144
Ragin, Charles, 326
Random-digit dialing, 212
Randomization, 251, 252–253
Random numbers table, 506–507
Random selection, 200, 211–212
Rape reform legislation, 402–403
Rasinski, Kenneth, 277
Rastafarianism, 58–59
Ratio measures, 151, 152
Rational objectivity, 42–45
Rattine-Flaherty, Elizabeth, 332
Ray, Melvin, 187
Reactivity, 317
Readers' Guide to Periodical Literature, 498–500
Reading social research, 471–485
  analyzing existing statistics, 477
  books, 473–474
  comparative and historical research, 477
  content analysis, 477
  data analysis, 478
  design of research, 474
  ethics, 493
  evaluation research, 477–478
  experiments, 475–476
  field research, 476–477
  Internet usage, 478–485
  journal articles, 472–473
  literature reviews, organizing, 471–472
  measurements, 475
  reporting, 478
  reports, evaluation of, 474–478
  sampling, 475
  survey research, 476
  theoretical orientations, 474
Real definitions, 140
Reality, 6–12
  authority, 8
  inquiry, errors in, 8–10
  inquiry, ordinary human, 6–7
  measurement, 133–134
  modern view, 10–11
  postmodern view, 11–12
  premodern view, 10
  selective observation, 9
  tradition, 7–8
Received knowledge, 41
Redden, David, 231
Redfield, Robert, 369
Reductionism, 110–111
Reflexivity, 320
Regoli, Mary Jean, 403–404
Reicker, Henry, 112
Reification, 136
Reliability
  in analysis of existing statistics, 366
  in measurement, 157–159, 163–164
  of qualitative field research, 344–345
Replication, 9, 365
Reports
  ethics, 73–74
  evaluations of, 474–478
Representativeness, 210
Research design, 96–125
  analysis of data, 122, 124–125
  cohort studies, 113–114
  conceptualization in project design, 120
  conclusions and reports, 122
  cross-sectional studies, 111–112
  data processing, 121–122, 124
  ethics of, 125
  longitudinal studies. See Longitudinal studies.
  method choice, 120
  necessary cause, 102–103
  nomothetic explanations. See Nomothetic explanations.
  observations, 121
  operationalization in project design, 120–121
  panel studies, 114–115
  population for study, 121
  project design, 117–123
  proposals. See Research proposals.
  purposes of, 97–99
  sampling, 121
  strategy examples, 117
  sufficient cause, 102–103
  trend studies, 112–113
  triangulation, 123
  units of analysis. See Units of analysis.
Research monographs, 473
Research proposals, 123
  analysis, 124–125
  budget, 125
  data-collection, 124
  institutional review boards, 125
  literature review, 124
  measurements, 124
  objective of, 124
  problem to be studied, 124
  schedule, 125
  subjects for study, 124
Respondents in survey research, 270–271
Response rates to interview surveys, 297–299
Response rates to questionnaires, 288–289
Response variable, 387
Riecken, Henry, 388
Ringer, Benjamin, 50
Rise of Christianity, The (Stark), 373
Roethlisberger, F. J., 249
Rokeach, Milton, 264
Roosevelt, Franklin D., 199, 202, 271
Roots: The Next Generation, 264
Roots (Haley), 264
Roper, Burns, 72
Rosenberg, Morris, 109
Rosenthal, Robert, 260
Ross, Jeffrey, 96
Rossman, Gabriel, 351
Rossman, Gretchen, 140, 317, 318
Rothman, Ellen, 374
Rubin, Herbert, 335, 337
Rubin, Riene, 335, 337
Rueschemeyer, Dietrich, 369
Ryerson, William N., 399

S
Sabido, Miguel, 405
Sabido methodology, 405
Sacks, Jeffrey, 158
Sampling, 121, 199–239
  available subjects, reliance on, 203–204
  bias, 208–209
  cluster sampling, 231–238, 355
  confidence intervals, 219–221
  confidence levels, 219–221
  in content analysis, 352–355
  elements, 211
  equal probability of selection method, 210–211
  estimated sampling error, 511
  ethics of, 238–239
  example of, 230
  history of, 199–203
  informants, 206–207
  Iran, 232
  judgmental sampling, 204–205
  modification of, 230–231
  multistage cluster sampling, 231–238
  nonprobability sampling, 203–207
  normal curve, 217
  parameters, 212
  periodicity, 225
  populations, 211, 221–223, 230
  probability proportionate to size sampling, 235–236
  probability sampling, 200, 207–221, 238, 250–251
  probability theory, 212–213
  purposive sampling, 204–205
  quota frame, 206
  quota sampling, 202, 205–206
  random-digit dialing, 212
  random selection, 200, 211–212
  reading social research, 475
  representativeness, 210
  sampling distributions, 212–218
  sampling error, 216–218, 220, 232–233
  sampling frames, 202, 221–223, 230
  sampling interval, 224
  sampling ratio, 224
  sampling unit, 212
  simple random sampling, 224, 225
  snowball sampling, 205
  statistics, 216
  stratification, 227, 234–235
  stratified sampling, 227–230
  study population, 211
  systematic sample with a random start, 224, 229
  systematic sampling, 224–227, 229–230
  types of sampling methods, 203, 223–231
  weighting, 236–238
Sanders, William, 99
Sapp, Stephen, 187
Scales, 169–170, 186–193
  Bogardus social distance scale, 186–187
  defined, 171
  Guttman scale, 190–193
  indexes compared, 170–173
  Likert scale, 188–189
  semantic differential, 189–190
  Srole scale, 143
  Thurstone scale, 187–188
"Scandal In Bohemia, A" (Conan Doyle), 53–54
Scarce, Rik, 71
Schachter, Stanley, 112
Schedule in research proposals, 125
Schmitt, Frederika E., 59
Schutz, Alfred, 323
"Science as a Vocation" (Weber), 82
"Science," definition of, 2
Scientific method, 45–48
Search engines, 479
Secondary analysis, 270, 304–307
Selective coding, 424
Selective observation, 9
Self-administered questionnaires, 286–290
  case study of, 289–290
  computer self-administered questionnaire, 299
  follow-up mailings, 288
  mail distribution and return, 286–287
  monitoring returns, 287–288
  response rates, 288–289
Selvin, Hanan, 213
Semantic differential, 189–190
Semiotics, 419–421
Sensate point of view, 370
Sense, Andrew J., 329
Separate but equal, 42, 83
September 11th, 201
Sexual research, 79–80, 84–85
Shaffir, William, 319
Shaw, George Bernard, 259, 260
Shea, Christopher, 77
Sherif, Muzafer, 44
Signs, 419
Silence, 41
Silverman, David, 421, 458
Silverman, George, 339
Simmel, Georg, 37, 38
Simple random sampling, 224, 225
Singer, Eleanor, 297
Singhal, Arvind, 332
Skinner, Steven, 289
Skocpol, Theda, 374
Smith, Andrew, 274
Smith, Dorothy, 328
Smith, Tom W., 305
Snow, David, 321, 322
Snowball sampling, 205
Social artifacts, 108–109
Social indicators, 406–408
Social interactions as units of analysis, 108
Social intervention, 385
Social Organization of Sexuality, The (Laumann), 85
Social regularities, 13–15
Social science, 12–22
  aggregates, 15–16
  attributes, 17
  dialectics. See Dialectics.
  ethics. See Ethics.
  paradigms. See Paradigms.
  purposes of social research, 21–22
  social regularities, 13–15
  theory, 13, 14
  variables, 16–21
"Social Structure and Anomie" (Merton), 142
Sociobiology, 111
Sociological diagnostics, 464–466
Sociologie, 36
Solomon four-group design, 258–259
Sorokin, Pitirim A., 370
Souls of Black Folk, The (DuBois), 42
SourceWatch, 492
Split-half method, 159
Spock, Benjamin, 2
Spohn, Cassie, 402–403
Spurious relationships, 100–101
Srole, Leo, 143–144
Srole scale, 143
Staab, Jennifer, 231
Standard deviation, 453
Stanley, Julian, 253, 256, 257, 258, 259
Stark, Rodney, 373
Static-group comparison, 254, 255
Statistical Abstract of the United States, 366–368
Statistics, 216
  analysis of existing statistics. See Analysis of existing statistics.
  statistical analysis. See Quantitative data analysis.
  statistical significance in nomothetic explanations, 102
Status, 420
Stebbins, Robert, 319
Steel, Paul, 396–397
Stein, Gertrude, 11
Stratification, 227, 234–235
Stratified sampling, 227–230
Strauss, Anselm, 56, 324, 359, 416, 418, 423, 426
Street Corner Society (Whyte), 321
Structural functionalism, 39–40
Structure of Scientific Revolutions, The (Kuhn), 35
Strunk, William Jr., 486
Study of Man, The (Linton), 40
Study population, 211
Subgroup comparisons, 455–458
Subjective knowledge, 41
Sufficient cause, 102–103
Suicide (Durkheim), 142, 144, 362–364
Summer, William Graham, 83
Survey research, 269–307
  bias, 277–278
  brevity of items, 276
  clarity of items, 273
  closed-ended questions, 272–273
  competency to answer, 274
  double-barreled questions, 273–274, 275
  ethics and, 307
  interview surveys. See Interview surveys.
  method comparisons, 302
  negativity of items, 276–277
  open-ended questions, 272
  question asking guidelines, 271–278
  questionnaires. See Questionnaires.
  reading social research, 476
  relevancy of questions, 274–276
  respondents, 270–271
  secondary analysis, 304–307
  social desirability of questions and answers, 277
  strengths and weaknesses of, 303–304
  technology, 299–301
  telephone surveys, 295–299
  topics, 270–271
  willingness to answer, 274
Symbolic interactionism, 37–38
Symbolic realism, 319
Systematic sample with a random start, 224, 229
Systematic sampling, 224–227, 229–230

T
Tables
  constructing and reading bivariate tables, 462–463
  percentaging, 460–462
Takeuchi, David, 57, 59
Tandon, Rajesh, 395
Taylor, Humphrey, 300
TDE, 299

U
Units of analysis, 104–111
  aggregates, 104
  in analysis of existing statistics, 365
  in content analysis, 352–355
  ecological fallacy, 109–110
  elements compared, 211
  faulty reasoning, 109–111
  groups, 106, 107
  individuals, 105–106, 107
  organizations, 106–107
  reductionism, 110–111
  social artifacts, 108–109
  social
interactions, 108 Tearoom Trade (Humphreys), 79–80 Telephone surveys, 295–299 Univariate analysis, 448–455 Terhanian, George, 300 central tendency, 450–453 Test-retest method, 158–159 continuous variables, 454 Theoretical notes, 427 defined, 448 Theoretical sampling, 325 detail versus manageability, 454–455 Theory, 13, 14, 45 discrete variables, 454 dispersion, 453–454 ethics and, 60 distributions, 449–450 importance of in “real world,” 59–60 research and, 58–59 Thomas, Edmund, 397 Unobtrusive Measures (Webb), 350 Unobtrusive research, 345–379 analysis of existing statistics. See Analysis of existing Thomas, W. I., 374 statistics. Three Mile Island, 263–264 comparative and historical research. See Comparative “Three strikes” laws, 400–401 and historical research. Through the Looking Glass (Carroll), 46 content analysis. See Content analysis. Thurstone scale, 187–188 ethics in, 378–379 Time-series designs, 392–393 multiple, 394–395 Tokugawa Religion (Bellah), 372 Unobtrusive Research (Webb), 350 URL, 485 Total Design Method, 289 Touchtone data entry (TDE), 299 V Trading Places, 260 Validity Tradition, 7–8 in analysis of existing statistics, 365 Traditional model of science, 45–48 of experiments, 254–259 Trend studies, 112–113 of measurements, 160–164 Triangulation, 123 of qualitative field research, 343–344 Trivariate relationships, 177–179 Value-free sociology, 82 Truman, Harry, 203 Variable-oriented analysis, 416 Tuckel, Peter, 289, 297 Variables, 16–21, 247 Tuskegee syphilis program, 409 continuous variables, 454 Twende na Wakati, 383–384, 399, 405 dependent variables, 19, 247 Typologies, 193–195 discrete variables, 454 independent variables, 19, 247 U operationalization, defining, 149 U.S. 
census, 85–86 Variance in index construction, 174 Understanding America (Perrow), 373–374 Veroff, Joseph, 115 Unidimensionality, 173 Virtual ethnography, 322 Uniform Crime Reports, 366 Voice Capture technology, 298 INDEX Voice recognition (VR), 299 Wilson, Edward O., 111 Voluntary participation, 67–68 Women’s Ways of Knowing (Belenky), 41 Woodhams, Carol, 331 W Working papers, 487 Walker, Jeffrey, 299 World Bank, 37 Wallace, Walter, 53 World Factbook, 480, 483 Ward, Lester, 59, 60 Writing social research, 471, 486–493 Warner, W. Lloyd, 326 aim of report, 487–488 Warner, William Lloyd, 144 analyses, guidelines for, 491–492 Warriner, G. Keith, 260 analysis and interpretation, 490 Weaver, Randy, 322 audience, 486 Webb, Eugene, 350 ethics, 493 Web-based experiments, 262 form of report, 486–487 Weber, Max, 82, 371–372, 377 “going public,” 492–493 Web research, 478–485 length of report, 486–487 Webster’s New World Dictionary, 67 literature reviews, 488 Weighting in sampling, 236–238 organization of report, 488–491 Weiss, Carol, 394 overview, 488 Weitzer, Ronald, 105 plagiarism, 489–490 Weitzman, Lenore, 108 purpose of report, 488 Wells, Ida B., 351 study design and execution, 490 Wharton, Amy, 180 summary and conclusions, 490–491 Wheel of Science, 53 When Prophecy Fails (Festinger/Reicker/Schachter), 112 Y Whitcomb, Michael, 301 Yammarino, Francis, 289 White, E. B., 486 Yinger, Milton, 252 White, William, 60 Whyte, William Foote, 321 Z Wieder, D. L., 323–324 Zerbib, Sandrine, 341, 432, 433–438 Wife battering prevention, 403–404 Zhang, Ray, 26 Wilson, Camilo, 300 Znaniecki, Florian, 374 549 PHOTO CREDITS Part and Chapter Openers Pages 1, 3: Bonnie Kamin/PhotoEdit 32: John-Claude LeJeune 64: Hazel Hanken/Stock Boston 92: Kathy Sloane/Photo Researchers, Inc. 94: Photographers Library LTD/eStock Photography 130: David M. Grossman/Photo Researchers, Inc. 
168: Stock Solution/Index Stock Imagery
198: Bob Daemmrich/PhotoEdit
242: Paul Conklin/PhotoEdit
244: Jeff Greenburg/PhotoEdit
268: Don Pitcher/Stock Boston
312: Paul Conklin/PhotoEdit
348: Dion Ogust/The Image Works
382: Wojnarowicz/The Image Works
412: David Falconer/Folio, Inc.
414: Jim Whitmer/Stock Boston
442: David Falconer/Folio, Inc.
470: John Henley/CORBIS

In-text Photos
Pages 4, 5, 33, 35, 65, 67, 82, 95, 97, 131, 135, 169, 173, 199, 207, 213, 245, 246, 269, 313, 318, 349, 383, 415, 443, 471, 473: Earl Babbie
444: Aaron Babbie