Understanding health disparity causes is an important first
step toward developing policies or interventions to eliminate disparities, but the nature
of these causes makes them challenging to identify and address.
Potential causal factors are
often correlated, making it difficult to distinguish their effects.
These factors may exist at different organizational levels (e.g.,
individual, family, neighborhood),
each of which needs to be appropriately conceptualized and
measured. The processes that
generate health disparities may
include complex relationships
with feedback loops and dynamic
properties that traditional statistical models represent poorly.
Because of this complexity,
identifying disparities’ causes
and remedies requires integrating
findings from multiple methodologies. We highlight analytic
methods and designs, multilevel
approaches, complex systems
modeling techniques, and qualitative methods that should be
more broadly employed and
adapted to advance health disparities research and identify
approaches to mitigate them.
(Am J Public Health. 2019;109:S28–S33. doi:10.2105/AJPH.2018.304843)
Neal Jeffries, PhD, Alan M. Zaslavsky, PhD, Ana V. Diez Roux, MD, PhD, John W. Creswell, PhD, Richard
C. Palmer, DrPH, Steven E. Gregorich, PhD, James D. Reschovsky, PhD, Barry I. Graubard, PhD, Kelvin
Choi, PhD, Ruth M. Pfeiffer, PhD, Xinzhi Zhang, MD, PhD, and Nancy Breen, PhD
Understanding health disparity causes is critical to developing policies to eliminate
them. However, identifying
these causes is challenging for
several reasons: Causal factors are
frequently correlated or interact
with each other and may form
long causal chains that hinder the
effort to distinguish causal effects
from noncausal associations.
Causal mechanisms may operate
at different levels, which include social structures, behaviors, and genes, each of which entails different approaches to conceptualization and measurement. Causal
processes leading to disparities may
involve feedback loops and dependencies that result in dynamic
relations and emergent properties
that are not easily reducible to independent effects. Key to understanding complex causes is
selecting appropriate methodologies and using complementary
approaches.
These challenges motivate our recommendations that researchers further develop and
expand the use of (1) study design
and analytical methods that
maximize the ability to draw
causal inferences from observational data, (2) modeling techniques that account for the
multilevel nature of health disparity causes, (3) complex systems
and simulation methods for
modeling dynamic relations, and
(4) qualitative and mixed methods
that allow a better understanding of relationships that cannot
be achieved using quantitative
methods alone. We highlight
methods supporting these
recommendations.
STUDY DESIGN AND ANALYTICAL APPROACHES
Causal effects may be defined
as the difference between potential
outcomes that would arise from
different treatments.1 But for a
particular participant at a particular
time, only the outcome associated
with the “assigned” treatment can
be observed; the outcome associated with the treatment that was
not provided is counterfactual. Research study design and analysis
are largely concerned with finding ways to compare observed
outcomes and appropriate
counterfactuals to make inferences regarding a causal effect.
Conceptual models are critical
to social science research, and in
recent decades formal graphical
tools have been adopted to guide
analyses and interpretation from
a causal inference perspective. In
particular, directed acyclic graphs2
are used to explicate the hypothesized causal relationships, determine
what causes are identifiable considering the information available (and
under what conditions), and
identify unintended consequences
of some analytical approaches (e.g.,
increasing rather than decreasing
bias as a result of statistical adjustment). By forcing investigators to
be explicit and share their underlying assumptions, these approaches also can enhance the
understanding of conflicting results
and facilitate discussion of plausible
causal pathways. Ideally, this process provides researchers a clearer
understanding of relevant relationships and suggests analytical
approaches to identify the causal
effects of interest. In our discussion
of analytic and study design approaches, directed acyclic graphs
can be used to focus on the assumptions and requirements for
causal inference.3
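As a concrete illustration of how a directed acyclic graph guides adjustment decisions, the brief simulation below (a hypothetical Python sketch with invented variables, not data from any study cited here) encodes a simple graph in which C confounds the exposure–outcome relationship and S is a common consequence (a collider) of exposure and outcome; adjusting for C removes bias, whereas adjusting for S introduces it.

```python
# Hypothetical simulation of a simple DAG: C -> X, C -> Y, X -> Y, and X -> S <- Y.
# The true effect of X on Y is 0.5.  Adjusting for the confounder C recovers it;
# adjusting for the collider S re-introduces bias, as the graph makes explicit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50_000
c = rng.normal(size=n)                       # confounder
x = 0.8 * c + rng.normal(size=n)             # exposure influenced by C
y = 0.5 * x + 0.8 * c + rng.normal(size=n)   # outcome; true causal effect of X is 0.5
s = x + y + rng.normal(size=n)               # collider: common consequence of X and Y
df = pd.DataFrame({"x": x, "y": y, "c": c, "s": s})

for formula in ("y ~ x", "y ~ x + c", "y ~ x + c + s"):
    est = smf.ols(formula, data=df).fit().params["x"]
    print(f"{formula:15s} estimated effect of x: {est:.2f}")
# Expected pattern: about 0.9 (confounded), 0.5 (correctly adjusted), biased again with S.
```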
ABOUT THE AUTHORS
Neal Jeffries is with the National Heart, Lung, and Blood Institute, National Institutes of
Health (NIH), Bethesda, MD. Alan M. Zaslavsky is with the Department of Health
Care Policy, Harvard Medical School, Boston, MA. Ana V. Diez Roux is with the Dornsife
School of Public Health, Drexel University, Philadelphia, PA. John W. Creswell is
with the Department of Family Medicine, University of Michigan, Ann Arbor. Richard
C. Palmer, Kelvin Choi, Xinzhi Zhang, and Nancy Breen are with the National Institute
on Minority Health and Health Disparities, NIH, Bethesda. Steven E. Gregorich is with
the Department of Medicine, University of California, San Francisco. James D. Reschovsky is
with Mathematica Policy Research, Washington, DC. Barry I. Graubard and Ruth M.
Pfeiffer are with the National Cancer Institute, NIH, Bethesda. Richard C. Palmer and
Nancy Breen are also Guest Editors for this supplement issue.
Correspondence should be sent to Neal Jeffries, PhD, National Heart, Lung, and Blood
Institute, National Institutes of Health, Room 9194, MSC 7913, 6701 Rockledge Drive,
Bethesda, MD 20892 (e-mail: nealjeff@nhlbi.nih.gov). Reprints can be ordered at http://www.
ajph.org by clicking the “Reprints” link.
This article was accepted October 21, 2018.
doi: 10.2105/AJPH.2018.304843
Experiments and Observational Studies
When practical, randomized
experiments provide an ideal
setting for evaluating a causal
effect. When “treatments” (or
exposure to the hypothesized
causal factor) are applied under
the control of a random process,
the researcher can have the most
confidence that the treatment is
independent of other factors
that might affect outcomes and
bias the estimates of treatment
effects.
Experimental randomization
provides powerful evidence for
the internal validity of a study,
specifically, the causal interpretation of differences in outcomes
within the study sample. Experimental research on health disparities faces 2 major challenges:
it may be logistically or ethically
infeasible to “randomize” causal
factors of interest (e.g., exposure
to racism, neighborhood attributes like walkability), and there
is uncertainty about the generalizability of effects seen in the
experiment to other populations
and situations. Issues of generalizability become especially salient
when interactions between factors are important, as may often be
the case for determinants of population health, and therefore the
broader context (or constellation of other co-occurring factors) of the
trial may influence its results. Also,
generalization may be further
limited4 because randomization
excludes individuals’ self-selection
to treatment options that might
work better for them than a randomly assigned treatment. Primarily for these feasibility and
generalizability reasons, observational studies have become the
mainstay of much research on
health disparity causes.
In observational studies, the
researcher does not control the
treatment or exposure. Without
randomization, the exposure of
interest (e.g., income) may be
correlated with other variables
(e.g., education) that contribute
to a disparity. This confounding
is the key challenge in using
observational studies to identify
health disparity causes. The
intercorrelated and interrelated
nature of many factors of interest
makes identifying the causal
pathway an especially vexing
problem in this field. The following section highlights some
of the analytical and design approaches used to improve the
utility of observational studies
in drawing causal inferences in
minority health and health disparities.
Addressing Confounding of Observational Data
Regression analysis. Regression analysis is a primary tool
for analyzing observational
studies; therefore, correct application and interpretation of regression techniques are critical
to health disparities research.
Regression-based methods are
applied to observational data
to create “comparability”
(i.e., adjust for potential confounding covariates) across
treated and untreated groups to
improve causal inference regarding the disparity (defined as
the treatment’s or exposure’s effect on health). The validity of
estimates of health disparities attributable to membership in a
disadvantaged group, as in any
causal inference drawn from regression modeling, depends on
the assumption of a correctly
specified model in which all the
important covariates and confounders are included as independent variables in the
correct functional form. Determining these assumptions’
viability may be especially
problematic, considering the
many potentially relevant factors
and the complex relationships
among them that are common in
health disparity settings.
The Peters-Belson5,6 and related Oaxaca-Blinder7,8 methods
are regression extensions that are
well suited for assessing health
disparities by modeling counterfactuals.9 The Peters-Belson
method first regresses a health
outcome on individual-level
covariates using data from the
majority group and then uses the
coefficients from the fitted model
to estimate the expected values of
the outcome for minority group
individuals as if they were members of the majority group.
Using these counterfactuals, the observed health disparity between the groups is decomposed into a part that is explained by the covariates and a remaining part that is not explained by the covariates.10
For example, Rao et al.11 used
logistic regression to apply the
Peters-Belson method to assess
Black–White disparities in
screening for colorectal cancer.
First, they used a logistic regression model with only the
White race sample to estimate
coefficients for the covariates
(e.g., income, having insurance
coverage, having a usual source of
medical care) associated with the
rate of colorectal cancer screening and the difference between
Whites and Blacks. Then, they
used these regression coefficients
for Whites to predict rates of
colorectal screening for Blacks.
The difference between the observed mean (i.e., proportion
screened) for Whites and the
mean of the predicted values for
Blacks is the part that is explained
by the covariates in the model.
The remainder is the unexplained disparity. Because the
Peters-Belson method only fits
the regression to the majority
group, it is useful when the minority group sample is small.
This partitioning technique
also can be used to estimate the
potential reduction in a disparity
if an intervention is implemented
to modify the covariates between
the groups. The Oaxaca-Blinder
method is similar to the Peters-Belson method but can be used
to further decompose the unexplained disparity.
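The following sketch illustrates the Peters-Belson steps with simulated data loosely patterned on the screening example; the covariates, sample sizes, and coefficients are illustrative assumptions, not values from Rao et al.

```python
# Hypothetical Peters-Belson decomposition with simulated data (not the actual NHIS data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 20_000
group = rng.choice(["majority", "minority"], size=n, p=[0.8, 0.2])
insured = rng.binomial(1, np.where(group == "majority", 0.85, 0.70), size=n)
usual_care = rng.binomial(1, np.where(group == "majority", 0.80, 0.65), size=n)
linpred = -1.0 + 1.2 * insured + 0.9 * usual_care - 0.3 * (group == "minority")
screened = rng.binomial(1, 1 / (1 + np.exp(-linpred)))
df = pd.DataFrame({"group": group, "insured": insured,
                   "usual_care": usual_care, "screened": screened})

# Step 1: fit the outcome model in the majority group only.
maj = df[df["group"] == "majority"]
model = smf.logit("screened ~ insured + usual_care", data=maj).fit(disp=False)

# Step 2: predict counterfactual screening for the minority group "as if" its members
# faced the majority group's coefficients.
mino = df[df["group"] == "minority"]
counterfactual = model.predict(mino).mean()

observed_majority = maj["screened"].mean()
observed_minority = mino["screened"].mean()
total_gap = observed_majority - observed_minority
explained = observed_majority - counterfactual        # due to covariate differences
unexplained = counterfactual - observed_minority      # remainder
print(f"total disparity {total_gap:.3f} = "
      f"explained {explained:.3f} + unexplained {unexplained:.3f}")
```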
Matching. Matching can be
used as an alternative or adjunct
to regression to improve comparability between treated and
untreated groups by pairing
treated cases with untreated cases
manifesting similar covariate
values. When applied correctly,
matching avoids 3 possible pitfalls
of simple regression adjustment:
extrapolation of regression predictions beyond the range of
observed data, manipulation of
regression models to obtain a
desired outcome, and bias arising
from a misspecified regression
model. Matching may be difficult
when there are many variables
under consideration.
Propensity score. An aid to this
process is the propensity score,
defined as the probability that an
observation will be in the treated
or exposed group given its
covariates. If a suitable propensity
score model can be identified, the
resulting estimated scores can
be used in several ways to help
bolster a causal inference about a
health disparity.12 These include
stratifying on propensity score
values during analyses, weighting
each group’s data inversely to its
propensity score, and matching
control- and treatment-group
individuals by propensity scores.
Importantly, these methods can
promise balance only on observed covariates; the causal
claims arising from these methods
depend on the assumptions that
there are no unmeasured confounders and that the propensity
model correctly specifies the
functional relationship between
covariates and propensity score.
These assumptions may be
problematic for health disparities
research, considering the complex range of social and biological
contributors.
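As an illustration of one of these uses, the hypothetical sketch below estimates propensity scores with logistic regression and applies inverse-probability-of-treatment weights; the covariates, treatment mechanism, and effect sizes are simulated and purely illustrative.

```python
# Hypothetical propensity score sketch: treatment assignment depends only on observed
# covariates, so weighting by the inverse of the estimated propensity recovers the true
# effect (1.0) that a naive comparison misstates.  All values are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 20_000
income = rng.normal(size=n)
education = 0.6 * income + rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-(0.8 * income + 0.5 * education)))
treated = rng.binomial(1, p_treat)
outcome = 1.0 * treated + 0.7 * income + 0.4 * education + rng.normal(size=n)
df = pd.DataFrame({"income": income, "education": education,
                   "treated": treated, "outcome": outcome})

# Propensity model: probability of treatment given the observed covariates.
ps = smf.logit("treated ~ income + education", data=df).fit(disp=False).predict(df)
df["w"] = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))   # inverse probability weights

naive = smf.ols("outcome ~ treated", data=df).fit().params["treated"]
ipw = smf.wls("outcome ~ treated", data=df, weights=df["w"]).fit().params["treated"]
print(f"naive difference: {naive:.2f}   IPW-adjusted estimate: {ipw:.2f}   (true effect 1.0)")
```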
Instrumental variable analysis.
Instrumental variable analysis
provides an alternative approach
to controlling for confounding.
An instrumental variable affects
the probability of receiving a
particular treatment or exposure
but has no plausible direct effect
on the outcome.13 For example,
studies of cancer survivors find
differences by income in quality
of life. Yet, these associations
cannot be interpreted as demonstrating causal effects of income on quality of life, because
quality of life also affects income.
A suitable instrumental variable
would exclude this possibility
of reverse causation.
Short and Mallonee14 constructed an instrumental variable
for income from information on home
ownership, sources of unearned
income, marital status at diagnosis, and spousal characteristics. Because the instrument
represented resources acquired or
measured before the onset of
cancer, reverse causality could be
excluded as an alternative explanation for these effects. The
assumption that the instrumental
variable for income has no causal
effect on or association with the
quality of life outcome, except
through its effect on income, is
the “exclusion condition” in this
example. The exclusion condition is essential to instrumental
variable analysis, but it cannot be
proven empirically. Instead it
must be founded on previous
theory about the possible causal
mechanisms at work (in this case
the exclusion condition might be
questioned if one believes marital
status at diagnosis has a direct
effect on later quality of life that
is independent of income).
When a satisfactory instrument can be identified, the
analysis has the benefit of controlling for both measured and
unmeasured confounders. Because unmeasured confounders
are common in health disparities
research, instrumental variable
analysis can be an especially valuable tool. However, finding a suitable instrument that plausibly meets the
exclusion condition is challenging, especially if one is limited to
the variables available from secondary use of an existing data set.
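The hypothetical two-stage least squares sketch below conveys the logic; the simulated instrument satisfies the exclusion condition by construction, which real applications can only argue for on substantive grounds, and it is not the instrument used by Short and Mallonee.

```python
# Hypothetical two-stage least squares sketch.  The simulated "instrument" shifts income
# but has no direct path to quality of life (the exclusion condition holds by design).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 20_000
u = rng.normal(size=n)                         # unmeasured confounding / reverse causation
instrument = rng.normal(size=n)                # e.g., pre-diagnosis resources
income = 0.7 * instrument + 0.8 * u + rng.normal(size=n)
qol = 0.5 * income - 0.9 * u + rng.normal(size=n)    # true causal effect of income: 0.5
df = pd.DataFrame({"instrument": instrument, "income": income, "qol": qol})

# Stage 1: predict the exposure from the instrument.
df["income_hat"] = smf.ols("income ~ instrument", data=df).fit().fittedvalues
# Stage 2: regress the outcome on the predicted exposure.
iv_est = smf.ols("qol ~ income_hat", data=df).fit().params["income_hat"]
ols_est = smf.ols("qol ~ income", data=df).fit().params["income"]
print(f"ordinary regression: {ols_est:.2f} (biased)   two-stage IV estimate: {iv_est:.2f}")
# Note: standard errors from this manual two-stage fit are not valid; dedicated IV
# routines that correct them should be used in practice.
```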
Natural experiments. Natural
experiments can be useful in
assessing the impacts of policies
relevant to health disparities. In
natural experiments or quasi-experiments, the treatment is
not randomized15 but instead is
determined by some actor or
force in ways that approximate
randomization in that it is plausibly unrelated to potentially
confounding factors. Examples
include differential geographic
availability of health services or
phase-in of a policy such as a new
educational approach or health
insurance through Medicaid
expansion. The researcher identifies a situation in which a
treatment is applied and selects
an analysis method to extract
information relevant to assessing
the causal effects. In the interrupted time series design, the
change occurs at a particular time,
such as enactment of a new law
(e.g., banning housing discrimination, reducing the thresholds
for Medicaid coverage). The
pre- and postchange outcome
trends are compared, possibly alongside simultaneous trends in another group unaffected by the change. In the
regression discontinuity design,15
exposure is defined as falling on
1 side of a threshold of some
characteristic. For example, areas
become eligible for a program
when the percentage in poverty
exceeds a program cutoff. In both
designs, the inclusion of control groups can strengthen causal inference.
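A minimal segmented-regression sketch of the interrupted time series design just described follows; the monthly outcome, the month-36 policy change, and all coefficients are simulated assumptions.

```python
# Hypothetical interrupted time series sketch: segmented regression comparing pre- and
# post-policy levels and trends, with a simulated policy taking effect at month 36.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
months = np.arange(72)
post = (months >= 36).astype(int)                    # indicator for the post-policy period
time_since = np.where(post == 1, months - 36, 0)     # months elapsed since the change
rate = 20 - 0.05 * months - 2.0 * post - 0.10 * time_since + rng.normal(0, 0.5, size=72)
df = pd.DataFrame({"rate": rate, "month": months, "post": post, "time_since": time_since})

its = smf.ols("rate ~ month + post + time_since", data=df).fit()
print(its.params[["post", "time_since"]])   # level change and slope change at the interruption
```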
Behrman,16 for example,
used a discontinuity design to
evaluate whether a change in
national education policies to
increase primary school opportunities for women would lead
to reduced HIV transmission.
Natural experiments, like randomized studies, are subject to
concerns regarding generalizability to broader settings, as the
environment allowing comparisons of similar groups might
reflect unusual circumstances
that are not widely available but
may influence the measured
outcome. For instance, a cigarette tax passing with voter
support may reflect a populace
favoring reduced cigarette
consumption.
Marginal structural models.
Major challenges in studying
health disparities are the presence
of time-varying confounders and
the possibility of variables being
simultaneously confounders and
mediators. For example, neighborhood of residence may affect
income (through access to jobs)
and income may in turn affect
residential location, and both
income and neighborhood may
affect cardiovascular risk. Income
is therefore both a confounder
and a mediator for neighborhood
health effects. Marginal structural
models17 have been developed to
address biased results from standard regression techniques for
handling confounding with this
type of complication, improving
on some older methods. This
modeling approach uses inverse probability weighting to create pseudopopulations in which an exposure's effect is not confounded with the covariates used for adjustment. This approach allows causal inference that reduces bias arising from time-varying confounders. Such confounders are common in health
disparities research that collects
longitudinal data reflecting
complex relationships among
variables.
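The hypothetical sketch below illustrates the weighting logic in a simplified two-period setting in which a covariate is both a mediator of the earlier exposure and a confounder of the later one; the variable names and parameters are illustrative, and robust (sandwich) standard errors would be needed in practice.

```python
# Hypothetical marginal structural model sketch.  L1 is affected by the baseline exposure
# A0 and in turn affects the later exposure A1 and the outcome, so it is both a mediator
# and a confounder.  Stabilized inverse probability weights create a pseudopopulation in
# which A1 is unconfounded by L1; a weighted regression then estimates the MSM parameters.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 50_000
a0 = rng.binomial(1, 0.5, size=n)                                 # baseline exposure
l1 = 0.8 * a0 + rng.normal(size=n)                                # time-varying covariate
a1 = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.9 * l1))))         # later exposure
y = 1.0 * a0 + 1.0 * a1 + 0.5 * l1 + rng.normal(size=n)           # outcome
df = pd.DataFrame({"a0": a0, "l1": l1, "a1": a1, "y": y})

# Stabilized weights: numerator P(A1 | A0), denominator P(A1 | L1, A0).
den = smf.logit("a1 ~ l1 + a0", data=df).fit(disp=False).predict(df)
num = smf.logit("a1 ~ a0", data=df).fit(disp=False).predict(df)
df["sw"] = np.where(df["a1"] == 1, num / den, (1 - num) / (1 - den))

msm = smf.wls("y ~ a0 + a1", data=df, weights=df["sw"]).fit()
adjusted = smf.ols("y ~ a0 + a1 + l1", data=df).fit()   # adjusts away the part of A0's
                                                        # effect that operates through L1
print("MSM estimates:", msm.params[["a0", "a1"]].round(2).to_dict())
print("covariate-adjusted:", adjusted.params[["a0", "a1"]].round(2).to_dict())
# (In practice, robust standard errors should accompany the weighted fit.)
```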
Fixed-effects models. Causal
inference from observational data
in health disparities may be
greatly enhanced when at least
some individual-level confounders can be held constant.
This is possible in some longitudinal settings with data collected from individuals over time.
Although their use in economics is longstanding, only recently have econometric fixed-effects models been adopted in public health and epidemiology. In fixed-effects models,18 an indicator
variable for each individual (or
group if that is the unit of analysis)
stands in for all time-invariant
characteristics, observed or not,
thus estimating the effects of a
time-varying individual-level
treatment while conditioning on
all time-invariant person-level
characteristics.
Different specifications of
fixed-effects models are adapted
to analyses of various data-collection designs as well as various assumptions about the
structure of the intervention’s
time-varying effect. For example,
a difference-in-differences analysis can be used to estimate the
effect of a single intervention
applied at a single time point.
More general versions incorporating pre- and postintervention trends are commonly
applied to analyzing data collected under an interrupted time
series design, whereas change
versus change models identify
changes in trend at multiple
time points corresponding
to introduction of several
interventions.
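The hypothetical sketch below shows a two-period difference-in-differences analysis in both its person-indicator (fixed-effects) form and its interaction form, with selection into the treated group deliberately related to unobserved person characteristics; sample sizes and effects are illustrative.

```python
# Hypothetical two-period panel: selection into the treated group is related to
# unobserved person-level characteristics, so a cross-sectional comparison is biased,
# while person indicators (fixed effects) or the difference-in-differences form recover
# the true effect (1.5).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 500
alpha = rng.normal(size=n)                                   # time-invariant person effects
grp = rng.binomial(1, 1 / (1 + np.exp(-1.5 * alpha)))        # selection related to alpha
person = np.repeat(np.arange(n), 2)
period = np.tile([0, 1], n)                                  # 0 = pre, 1 = post
group = np.repeat(grp, 2)
treat = group * period                                       # treated group, post period only
y = 2.0 * np.repeat(alpha, 2) + 0.3 * period + 1.5 * treat + rng.normal(size=2 * n)
df = pd.DataFrame({"y": y, "person": person, "period": period,
                   "group": group, "treat": treat})

naive = smf.ols("y ~ group", data=df[df["period"] == 1]).fit().params["group"]
fe = smf.ols("y ~ treat + period + C(person)", data=df).fit().params["treat"]
dd = smf.ols("y ~ group * period", data=df).fit().params["group:period"]
print(f"post-period comparison: {naive:.2f}   fixed effects: {fe:.2f}   "
      f"difference-in-differences: {dd:.2f}   (true effect 1.5)")
```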
Fixed-effects models do have
limitations: they may be subject
to confounding by unmeasured
time-varying covariates, are not
well suited to the investigation of
causal processes with long-term
lags, and may be inefficient if little
within-person variation is observed. Mujahid et al. examined
fixed-effects models in a health
disparities context to assess
whether between-person differences in cardiovascular outcomes
(e.g., racial differences) persist
after controlling for higher-level
differences (e.g., neighborhood
factors).19
This selection of techniques is
not comprehensive but instead
emphasizes some of the more
common and useful approaches
to mitigating confounding. In a
broad review of US health disparities, Adler and Rehkopf20
highlighted a similar set of
methodological approaches to
reduce confounding. We, instead, focus on methods to address specific challenges in
disparities research and include
more recently developed
approaches.
Confounding may be the
most acknowledged problem for
observational studies, but a separate, related problem concerns
appropriate analysis for causes
that can arise from multiple levels
of analysis. Analyses that neglect
this multilevel aspect can also lead
to biased estimates and incorrect
inferences.
MULTILEVEL NATURE OF HEALTH DISPARITY CAUSES
Considering the complexity
of health disparities etiology,
factors driving health outcomes
may arise from different levels.
For example, when modeling
cancer mortality, differences by
socioeconomic status may be
related to the patient’s genetic
background, health history, residential and work environment,
and state-based health care policies. Multiple levels may need to
be incorporated into modeling.
The multilevel nature of potential health disparity causes can introduce significant analytic complexities.
Factors may affect individuals in the same way; for example, individuals in a common
neighborhood may share environmental exposures, health
care providers, and state policies. This sharing introduces
correlation between individual
outcomes that must be accommodated in statistical modeling.
Hierarchical, or multilevel,
models have been developed to
account for correlation in such
situations.21,22
Furthermore, failure to incorporate a relevant level in the
analysis can lead to incorrect inferences. For example, 2 related
questions could be posed: across
hospitals, does higher average
patient income result in lower
readmission rates and, within
hospitals, do patients with higher
incomes (relative to other patients in the same hospital) have
lower readmission rates? To address these questions, consider a
sample of hospitals and their patients’ income and readmission
information. A model that decomposes the effect of income
into between- and within-hospital effects would include 2
income variables: a hospital-level
variable describing average patient income and a patient-level
variable describing the difference
between the patient’s income
and the hospital’s mean value. In
a multilevel model, the effect of
hospital-level mean income on
30-day readmission represents a
between-hospital income effect
addressing the first question.
The effect of patient-level
income deviation scores represents a within-hospital effect,
reflecting how patients’ income
affected their probability of
30-day readmission relative to
their same hospital counterparts
with average incomes. When the
corresponding between- and
within-hospital effects are not
equivalent, a simpler model that
regressed the outcome onto the
observed patient-level income
variable would be misspecified;
the estimated income effect
would represent a weighted average of the between- and within-hospital effects of patient income
and would obfuscate the potentially complex relationships between patient income and the
modeled outcome.23
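A hypothetical sketch of this decomposition follows, using simulated patients nested in hospitals and a random-intercept model; a linear probability specification is used purely for brevity, and a multilevel logistic model would be more appropriate for a binary readmission outcome.

```python
# Hypothetical between/within decomposition with simulated patients nested in hospitals.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
hospitals, per_hosp = 100, 50
hosp = np.repeat(np.arange(hospitals), per_hosp)
hosp_mean_income = np.repeat(rng.normal(size=hospitals), per_hosp)     # hospital context
income = hosp_mean_income + rng.normal(size=hospitals * per_hosp)      # patient income
hosp_effect = np.repeat(rng.normal(scale=0.05, size=hospitals), per_hosp)
p_readmit = np.clip(0.20 - 0.08 * hosp_mean_income
                    - 0.02 * (income - hosp_mean_income) + hosp_effect, 0.01, 0.99)
readmit = rng.binomial(1, p_readmit)
df = pd.DataFrame({"hosp": hosp, "income": income, "readmit": readmit})

# Decompose income into a hospital-mean (between) part and a patient deviation (within) part.
df["income_between"] = df.groupby("hosp")["income"].transform("mean")
df["income_within"] = df["income"] - df["income_between"]

m = smf.mixedlm("readmit ~ income_between + income_within",
                data=df, groups=df["hosp"]).fit()
print(m.params[["income_between", "income_within"]].round(3))
# A model with only raw patient income would blend these two distinct effects.
```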
A related feature of multilevel
models important to health disparities research is their ability to
separate the factor effects at different levels and to model interactions across levels. For
example, the effect of patient
income on measures of diabetes
control might be different in
clinics that have nurse educators
who follow up with patients than
in those that do not. The moderating effect of clinic staffing on
income disparity is represented
by a cross-level interaction in the
multilevel model (in this case,
between the clinic and individual
patient level). Such effects may
elucidate the mechanisms of the
income effect or suggest interventions, possibly operating at
several levels, to reduce disparity.
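Continuing the sketch above, a cross-level interaction can be represented by a product term between the patient-level and clinic-level variables; the staffing variable, outcome, and coefficients below are again simulated assumptions.

```python
# Hypothetical cross-level interaction: the patient-level income effect on a diabetes
# control measure differs between clinics with and without nurse educators.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
clinics, per_clinic = 80, 40
clinic = np.repeat(np.arange(clinics), per_clinic)
nurse_educator = np.repeat(rng.binomial(1, 0.5, size=clinics), per_clinic)  # clinic level
income = rng.normal(size=clinics * per_clinic)                              # patient level
clinic_effect = np.repeat(rng.normal(scale=0.2, size=clinics), per_clinic)
hba1c = (8.0 - 0.4 * income + 0.25 * income * nurse_educator
         - 0.1 * nurse_educator + clinic_effect
         + rng.normal(scale=0.5, size=clinics * per_clinic))
df = pd.DataFrame({"clinic": clinic, "nurse_educator": nurse_educator,
                   "income": income, "hba1c": hba1c})

m = smf.mixedlm("hba1c ~ income * nurse_educator", data=df, groups=df["clinic"]).fit()
print(m.params[["income", "nurse_educator", "income:nurse_educator"]].round(2))
# The interaction term captures how the within-clinic income effect is moderated
# by the clinic-level staffing variable.
```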
Characterizing the multilevel nature of health disparity relationships may require more complex structures among factors. In Subramanian et al.,24 a
multilevel model reexamination
of 1930 census data shows that
the interpretation of the relationship between an individual’s race and literacy is
improved by accounting for
state-level policy characteristics
and their cross-level interaction
with individual characteristics.
The richness of these models may
lead to better understanding but
will also require more complicated analyses. Hierarchical
modeling is a common analytic
approach to building multilevel
models, but other methods can
also incorporate this type of
complexity. Complex systems
and simulation-based analyses
can capture these relationships
and model feedback loops and
other complexities in relationships among health disparities
factors.
COMPLEX SYSTEMS AND SIMULATION MODELING
Complex systems approaches
can be used to provide insight
into how a system functions,
identify points for intervention,
explore specific hypotheses about
causation, and identify plausible
impacts and unintended consequences of an intervention
under varying conditions. These
methods are especially useful in
situations involving factors at
different levels, feedback, and
dependencies, all of which
characterize health disparity
questions.25 The complex systems perspective is general and
encompasses several analytic approaches, including agent-based
modeling, system dynamics
simulation, network analysis, and
microsimulation. The choice of
analytic approaches depends on
the research question.26
Agent-based models allow the modeling of interactions among agents and their responses to a set of conditions, given the rules used to define agents' behaviors (these can be either probabilistic or deterministic27), and can assess
these interactions’ effects. For
example, Orr et al.28 used agent-based modeling to forecast the
effect of improving the quality
of neighborhood schools on
reducing racial disparities in
obesity-related dietary behaviors.
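The toy agent-based sketch below, which is not the Orr et al. model, illustrates the basic ingredients: agents embedded in a social network, simple behavioral rules with a peer-influence term, and an intervention that shifts baseline propensities in the disadvantaged group. All rules and parameters are invented for illustration.

```python
# Minimal hypothetical agent-based sketch: two neighborhoods, social ties, and a
# "school quality" intervention that raises the adoption propensity in group 1.
import numpy as np
import networkx as nx

rng = np.random.default_rng(9)

def run(intervention: bool, n_per_group: int = 300, steps: int = 50):
    n = 2 * n_per_group
    group = np.array([0] * n_per_group + [1] * n_per_group)     # 0 = advantaged, 1 = disadvantaged
    g = nx.watts_strogatz_graph(n, k=6, p=0.1, seed=42)         # simple social network
    healthy = rng.random(n) < np.where(group == 0, 0.5, 0.3)    # initial dietary behavior
    baseline = np.where(group == 0, 0.20, 0.10)                 # per-step adoption propensity
    if intervention:
        baseline = np.where(group == 1, 0.16, baseline)         # intervention in group 1
    for _ in range(steps):
        for i in g.nodes:
            nbrs = list(g.neighbors(i))
            peer = np.mean(healthy[nbrs]) if nbrs else 0.0      # social influence term
            healthy[i] = rng.random() < baseline[i] + 0.5 * peer  # agent updates its behavior
    return healthy[group == 0].mean(), healthy[group == 1].mean()

for flag in (False, True):
    adv, dis = run(intervention=flag)
    print(f"intervention={flag!s:5}  healthy-diet prevalence: "
          f"{adv:.2f} vs {dis:.2f}  gap={adv - dis:.2f}")
```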
System dynamics simulations are
particularly well suited for understanding population-level
processes and flows. The Prevention Impacts Simulation
Model, a system dynamics
model, was used to examine the
potential influence of different
types of interventions for reducing cardiovascular risks.29
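A minimal stock-and-flow sketch in the same spirit (and not the PRISM model itself) is shown below; the three population stocks, the flow rates, and the intervention effect are illustrative assumptions.

```python
# Hypothetical system dynamics sketch: three population stocks (healthy, at risk, with
# cardiovascular disease) with flows between them; an intervention lowers the rate of
# becoming at risk.
import numpy as np
from scipy.integrate import odeint

def flows(stocks, t, risk_onset_rate):
    healthy, at_risk, cvd = stocks
    becoming_at_risk = risk_onset_rate * healthy
    developing_cvd = 0.05 * at_risk
    recovering = 0.02 * at_risk
    return [recovering - becoming_at_risk,
            becoming_at_risk - developing_cvd - recovering,
            developing_cvd]

t = np.linspace(0, 30, 301)                             # 30 years
start = [0.70, 0.25, 0.05]                              # initial population shares
baseline = odeint(flows, start, t, args=(0.08,))
intervention = odeint(flows, start, t, args=(0.05,))    # intervention lowers risk onset
print(f"CVD share after 30 years: baseline {baseline[-1, 2]:.2f}, "
      f"with intervention {intervention[-1, 2]:.2f}")
```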
Network analysis investigates
how social ties between individuals, groups, or organizations contribute to health
disparities. Buchthal and Maddock30 employed network analysis to identify gaps in
communication and collaboration patterns of organizations that
provide nutrition education to a
low-income, ethnically diverse
population in Hawaii.
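The brief sketch below shows the kind of descriptive network measures such an analysis might draw on, using an invented set of organizations and collaboration ties rather than the Hawaii data.

```python
# Hypothetical network analysis sketch: organizations are nodes, reported collaborations
# are edges, and simple measures flag unconnected organizations and potential bridges.
import networkx as nx

edges = [("FoodBank", "HealthDept"), ("HealthDept", "ClinicA"), ("ClinicA", "ChurchGroup"),
         ("FoodBank", "SchoolDistrict"), ("SchoolDistrict", "ParentAssoc"),
         ("HealthDept", "SchoolDistrict"), ("CommunityCtr", "ChurchGroup")]
g = nx.Graph(edges)
g.add_node("TribalHealthOrg")                      # an organization reporting no collaborations

print("density:", round(nx.density(g), 2))
print("isolates (no partnerships):", list(nx.isolates(g)))
bridges = sorted(nx.betweenness_centrality(g).items(), key=lambda kv: -kv[1])[:3]
print("highest-betweenness (potential bridging) organizations:", bridges)
```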
These models have great potential to improve the assessment
and identification of effective
interventions to reduce health
disparities. As computational capabilities grow, system approaches may lead to more
sophisticated modeling reflecting
realistic complexities, and simulation methods will illuminate
the relationships among important factors to address health
disparities.25,31 For all models,
however, underlying assumptions need to be carefully assessed
to ensure interpretable and
meaningful results. Using evidence and data to formulate the
dynamic models, set parameter
values, and validate model
functioning is also crucial.
Comparing models is thus an
important aspect of systems
analysis, as is done comprehensively in the Cancer Intervention
and Surveillance Modeling
Network.32
INCORPORATING QUALITATIVE APPROACHES
The quantitative methods
discussed are best employed in
the context of an existing conceptual model with hypothesized
relationships between outcomes
and possible causal factors. Qualitative research can be used to
identify plausible causal factors
and processes that are relevant to
health disparities, generate and
refine conceptual models and
hypotheses, and explain the relationships among factors documented in quantitative studies.
These analyses may be especially
valuable for uncovering important factors when applied to
populations for which little previous research exists. Qualitative
approaches to identifying health
disparity causes can serve as
stand-alone analyses or can augment, guide, or enhance quantitative methods. Building a
holistic picture using the descriptions study participants
provide to understand complex
social, economic, or organizational phenomena is the common element that resonates in
all qualitative research.33
Qualitative research is primarily inductive and depends
on the purposeful selection of
participants. This perspective
distinguishes qualitative from
quantitative research.34,35 Ideally, qualitative research provides
a realistic interpretation of the
world from the participants’
perspectives. However, these
interpretations need to be validated (e.g., member checking,
intercoder agreement checks36).
Qualitative research findings focus on specific situations and
contexts and, consequently, have
limited generalizability. The use of qualitative approaches in health disparities research remains limited and only recently has been
integrated with quantitative
work.
Quantitative and qualitative
methods, once seen as diametrically opposed, have
emerged as essential complementary tools in community-based participatory research and
other types of health disparities
research.37,38 Mixed methods
research integrates quantitative
and qualitative methods to corroborate results, generate causal
hypotheses, elaborate on findings, or augment intervention
trials or program evaluations.39
Mixed methods may use qualitative and quantitative methods
simultaneously or sequentially.
In sequential designs, a component produces data or
theory that informs the next
component.34,40
Stewart et al.41 presented 2
exemplar mixed methods studies
illustrating the use of qualitative
methods. The first examined
social exclusion and social isolation in low-income populations
sequentially. The authors used
qualitative interviews in the first
phase to guide item development for second-phase survey
questions. The second study
examined family caregivers’
support of seniors with chronic
conditions. Qualitative interview data were collected during
and after the intervention to
explore processes, such as the
participants’ perceived impacts
and satisfaction with the intervention. The richness of data
obtained from a mixed methods
approach allows findings to be
corroborated and expanded.
Corroboration is important for
health disparities studies, especially studies of hard-to-reach
populations because of the
limited background literature
on these populations. More approaches that integrate quantitative methods with qualitative approaches to identify causes and validate findings are needed.

RECOMMENDATIONS FOR IMPROVING HEALTH DISPARITIES RESEARCH

Recommendation 1: Strengthen and promote analytic methods that maximize the ability to draw causal inferences from observational studies and enable a better understanding of health disparity causes.

Recommendation 2: Incorporate and further develop models that reflect the multilevel nature of health disparity causes to provide richer and more accurate characterizations of plausible causal pathways.

Recommendation 3: Expand the use of complex systems and simulation modeling to increase the ability to model intricate relationships between health disparities and health determinants and to assess health disparities interventions.

Recommendation 4: Incorporate the further use of qualitative and mixed methods analysis so participant perspectives can illuminate plausible causal mechanisms and provide better understanding of the impacts of policies and interventions.
CONCLUSIONS
Research on health disparity causes is subject to several sources of complexity. Disparities may arise from multiple causes that are susceptible to confounding that masks true effects. These causes
may arise from different levels,
thereby requiring more complex
analytic methods. Causal pathways may exhibit feedback loops
and interdependencies that are
poorly assessed using simple,
standard modeling approaches.
The accompanying box provides
recommendations to address
these challenges.
Linking research on causes to
policy action (and vice versa) is
critical to making etiologic research policy relevant and improving etiologic research so
more effective policies and interventions can be identified.
To do so, it is vital to improve
available methods and the training of future generations of diverse researchers in multiple
methodologic approaches.
CONTRIBUTORS
All authors contributed to the conceptualization, discussion, writing, and reviewing
of this essay.
ACKNOWLEDGMENTS
This work was supported by the National
Cancer Institute, the National Heart,
Lung, and Blood Institute (NHLBI), the
National Institute on Minority Health
and Health Disparities (NIMHD), and
the National Institutes of Health (NIH).
This content resulted from a
NIMHD-led workshop to address the
methods and measurement science of
health disparities.
Note. The views expressed in this
essay are those of the authors and do not
necessarily represent the views of the
NHLBI, the NIH, or the US Department
of Health and Human Services.
CONFLICTS OF INTEREST
No conflicts of interest.
HUMAN PARTICIPANT
PROTECTION
No protocol approval was necessary
because no human participants were
involved in this essay.
REFERENCES
1. Imbens GW, Rubin DB. Causal Inference in Statistics, Social, and Biomedical Sciences. New York, NY: Cambridge University Press; 2015.
2. Greenland S, Pearl J, Robins JM. Causal diagrams for epidemiologic research. Epidemiology. 1999;10(1):37–48.
3. Morgan SL, Winship C. Counterfactuals and Causal Inference: Methods and Principles for Social Research. New York, NY: Cambridge University Press; 2007.
4. Barile JP, Smith AR. Our theories are only as good as our methods. Global Journal of Community Psychology Practice. 2016;7(2):1–5.
5. Belson W. A technique for studying the effects of a television broadcast. J Roy Stat Soc C-App. 1956;5(3):195–202.
6. Peters CC. A method of matching groups for experiment with no loss of population. J Educ Res. 1941;34(8):606–612.
7. Blinder AS. Wage discrimination: reduced form and structural estimates. J Hum Resour. 1973;8(4):436–455.
8. Oaxaca R. Male–female wage differentials in urban labor markets. Int Econ Rev. 1973;14(3):693–709.
9. Pylypchuk Y, Selden TM. A discrete choice decomposition analysis of racial and ethnic differences in children's health insurance coverage. J Health Econ. 2007;27(4):1109–1128.
10. Gastwirth JL, Greenhouse SW. Biostatistical concepts and methods in the legal setting. Stat Med. 1995;14(15):1641–1653.
11. Rao RS, Graubard BI, Breen N, Gastwirth JL. Understanding the factors underlying disparities in cancer screening rates using the Peters-Belson approach: results from the 1998 National Health Interview Survey. Med Care. 2004;42(8):789–800.
12. Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70(1):41–55.
13. Angrist JD, Imbens GW, Rubin DB. Identification of causal effects using instrumental variables. J Am Stat Assoc. 1996;91(434):444–455.
14. Short PF, Mallonee EL. Income disparities in the quality of life of cancer survivors. Med Care. 2006;44(1):16–23.
15. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-experimental Designs for Generalized Causal Inference. 2nd ed. Boston, MA: Houghton Mifflin; 2002.
16. Behrman JA. The effect of increased primary schooling on adult women's HIV status in Malawi and Uganda: universal primary education as a natural experiment. Soc Sci Med. 2015;127:108–115.
17. Robins JM, Hernán MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology. 2000;11(5):550–560.
18. Allison PD. Fixed Effects Regression Models. Thousand Oaks, CA: SAGE; 2009.
19. Mujahid MS, Moore LV, Petito LC, Kershaw KN, Watson K, Diez Roux AV. Neighborhoods and racial/ethnic differences in ideal cardiovascular health (the Multi-Ethnic Study of Atherosclerosis). Health Place. 2017;44:61–69.
20. Adler NE, Rehkopf DH. U.S. disparities in health: descriptions, causes, and mechanisms. Annu Rev Public Health. 2008;29:235–252.
21. Raudenbush SW, Bryk AS. Hierarchical Linear Models: Applications and Data Analysis Methods. Thousand Oaks, CA: SAGE; 2002.
22. Diez-Roux AV. Multilevel analysis in public health research. Annu Rev Public Health. 2000;21:171–192.
23. Neuhaus JM. Assessing change with longitudinal and clustered binary data. Annu Rev Public Health. 2001;22:115–118.
24. Subramanian SV, Jones K, Kaddour A, Krieger N. Revisiting Robinson: the perils of individualistic and ecologic fallacy. Int J Epidemiol. 2009;38(2):342–360.
25. Diez Roux AV. Complex systems thinking and current impasses in health disparities research. Am J Public Health. 2011;101(9):1627–1634.
26. Luke DA, Stamatakis KA. Systems science methods in public health: dynamics, networks, and agents. Annu Rev Public Health. 2012;33:357–376.
27. Hammond RA. Appendix A: considerations and best practices in agent-based modeling to inform policy. In: Wallace RB, Geller A, Ogawa VA, eds. Assessing the Use of Agent-Based Models for Tobacco Regulation. Washington, DC: National Academies Press; 2015.
28. Orr MG, Galea S, Riddle M, Kaplan GA. Reducing racial disparities in obesity: simulating the effects of improved education and social network influence on diet behavior. Ann Epidemiol. 2014;24(8):563–569.
29. Homer J, Wile K, Yarnoff B, et al. Using simulation to compare established and emerging interventions to reduce cardiovascular disease risk in the United States. Prev Chronic Dis. 2014;11:E195.
30. Buchthal OV, Maddock JE. Mapping the possibilities: using network analysis to identify opportunities for building nutrition partnerships within diverse low-income communities. J Nutr Educ Behav. 2015;47(4):300–307.
31. Maglio PP, Mabry PL. Agent-based models and systems science approaches to public health. Am J Prev Med. 2011;40(3):392–394.
32. Clarke LD, Plevritis SK, Boer R, Cronin KA, Feuer EJ. A comparative review of CISNET breast models used to analyze U.S. breast cancer incidence and mortality trends. J Natl Cancer Inst Monogr. 2006;(36):96–105.
33. Creswell JW, Poth CN. Qualitative Inquiry Research Design: Choosing Among Five Approaches. Los Angeles, CA: SAGE; 2018.
34. Curry LA, Nembhard IM, Bradley EH. Qualitative and mixed methods provide unique contributions to outcomes research. Circulation. 2009;119(10):1442–1452.
35. Patton MQ. Qualitative Evaluation and Research Methods. Los Angeles, CA: SAGE; 2002.
36. Creswell JW. 30 Essential Skills for the Qualitative Researcher. Los Angeles, CA: SAGE; 2016.
37. Creswell JW. A Concise Introduction to Mixed Methods Research. Thousand Oaks, CA: SAGE; 2015.
38. Pope C, Ziebland S, Mays N. Qualitative research in health care. Analysing qualitative data. BMJ. 2000;320(7227):114–116.
39. Johnson RB, Onwuegbuzie AJ. Mixed methods research: a research paradigm whose time has come. Educ Res. 2004;33(7):14–26.
40. Curry LA, Nunez-Smith M. Mixed Methods in Health Science Research. Los Angeles, CA: SAGE; 2014.
41. Stewart M, Makwarimba E, Barnfather A, Letourneau N, Neufeld A. Researching reducing health disparities: mixed-methods approaches. Soc Sci Med. 2008;66(6):1406–1417.