Why does this blog exist? (in a shallow sense)

One of the main impetuses (aside: why can’t it be impeti?) for my efforts to set up this blog was that when I googled “social epidemiology” or “social epidemiology blog”, I was severely underwhelmed by what I found. And don’t even get me started on the Wikipedia page. Even looking for “‘social determinants of health’ blog” brings up individual entries, not blogs devoted to the subject.

This was a shame, if not entirely a surprise. A couple of years back, a group of students in the Department of Society, Human Development and Health at HSPH thought it would be a great idea to set up a group blog (à la scatterplot, among others). This effort – under the moniker societyandhealth – was sadly rather short-lived. I think this was partly due to its being started shortly before summer break, and partly due to each of us having a slightly different idea of what the site might do.

In large part, I think that the lack of material or coherence on the web reflects the breadth of the field and perhaps uncertainty regarding its epistemology. I have heard it argued that public health in general, and social epidemiology in particular, must be a normative science and an activist discipline: if we find things that are causing ill-health, a failure to act on them through communication and policy change is little short of criminal. Such arguments resonate with the efforts of Mayor Bloomberg.

On the other hand, there is still only a limited amount of actual evidence for health being associated with social conditions – certainly compared to more traditional risk factors such as behaviours and environmental exposures.  Thus there is a more positive science angle that says we need to run more, and better, studies to figure out which exposures cause which outcomes and through which mediating pathways.

It is also notable that social epidemiology aims to shift the discussion regarding causation in public health by changing what is a valid cause of health. (On which topic, if you haven’t heard of this book, you should get thee to a bookshop asap).  It is therefore an aggressive force for epistemological change.  And this is something I love about the field.  It does, however, often make it hard to nail down what is covered within its remit, since that keeps changing too.

All of which makes for a very interesting field, but not an easy one to follow online. Which could probably also be said of this blog post. My point, however, is that in the absence of a blog devoted to social epidemiology and the social determinants of health, I thought that I might as well cast off from the shore and see where the currents take me. I think we’re still within sight of the point of embarkation, but hopefully soon there’ll be new lands to discover, and maybe even pirates to fight. But enough with the extended metaphor. For now.

P.S. If I’ve abused the term epistemology, my apologies. In philosophy, as in so much else, a little vocabulary is a very dangerous thing.


“Weighted to reflect my beliefs”

Alert: This post does not contain empirical evidence. Rant/size ratio: high. Also, I got a bit carried away, sorry for the length.

One of the most aggravating phrases I see in research papers, and subsequent news reports, is the one that goes something like:

This study consisted of subjects, selected using a methodology and weighted to be nationally representative

Now there’s nothing wrong with this sentence, and the aim of achieving results which can be generalized to the whole population is an admirable one. The problem, for me, is that it is one of the few parts of any methods section in a paper that doesn’t require justification, explanation or referencing.

Which is to say, I have no idea how you generated the weights you used here. When the sampling methodology is a stratified or non-proportional one, I can assume that you re-weighted it back based on the stratifying and selection criteria. But that’s rarely the major problem with survey data.

Rather, I am likely to be concerned about differential non-response, i.e. who didn’t answer the questionnaire. If weights are applied here, they have to be based on some characteristic of the individuals. To give an example: say I selected 3,000 people truly at random to survey about their fruit eating preferences, and only 2,000 of them accepted. I know that ~51% of respondents should have been women, but in fact 55% were. I can adjust for this by weighting each woman’s response a bit less, and each man’s a bit more, to figure out how many people in the USA love oranges.
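The arithmetic behind that kind of post-stratification weight is simple enough to sketch in a few lines. The numbers below are the hypothetical ones from the example (55% women responding vs. ~51% expected); the orange-loving percentages are made up purely for illustration:

```python
# Post-stratification weighting sketch: each respondent in group g gets
# weight = (population share of g) / (sample share of g), so that the
# weighted sample matches the population on that characteristic.

population_share = {"women": 0.51, "men": 0.49}  # known shares (e.g. from census)
sample_share = {"women": 0.55, "men": 0.45}      # shares observed among respondents

weights = {g: population_share[g] / sample_share[g] for g in population_share}
# women get weighted down a bit (~0.93), men up a bit (~1.09)

# Hypothetical outcome: 60% of women and 40% of men say they love oranges.
loves_oranges = {"women": 0.60, "men": 0.40}

# Unweighted estimate reflects the (skewed) sample composition;
# the weighted estimate reflects the population composition.
unweighted = sum(sample_share[g] * loves_oranges[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * loves_oranges[g] for g in sample_share)

print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
# unweighted: 0.510, weighted: 0.502
```

Note that the weighted estimate is only “corrected” if sex actually predicts the outcome and non-response – which is exactly the judgment call the next paragraphs are about.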

And here’s where people’s internal biases come in. Sex and age are the relatively obvious things to look at and adjust for. And probably location too – maybe more urbanites respond because you can find them easily, or more rural inhabitants because they’re always home at 8pm. After that, it all comes down to your point of view, and crucially which factors you think affect fruit preferences. I mean, if men and women on average have similar preferences, the hassle of weighting your sample isn’t really worth it.

Do you think that social factors matter? If so, which ones? Race/ethnicity might be do-able if you have census data on the proportion of people of each group living in an area. But SES isn’t routinely on the census, so if you think education, income or wealth matter, that’s going to be tough. Or maybe you think that genes are important, in which case, good luck. In the end, it all comes down to the perspective you have on the issue – something pointed out forcefully to me in the class that this book is based on.

And despite the foregoing rant, I don’t have a problem with weighting. But I do have a problem with us not being told what the weights are based on, in a nice simple manner. Preferably not buried in a reference of a reference somewhere. And if reviewers insisted on it, this could be an easy change in editorial practice. FTW.

(Oh, and don’t even get me started on biases based on inaccurate responses and the fun of social desirability bias.)