May 25, 2011
This entry is going to contain a lot of stuff that’s been swirling around in my mind for several weeks, and I’m not sure it all quite fits together, but I want to put some thoughts out there.
The first trigger was that I tangentially got involved in one of those discussions about whether science is better than religion. I normally don’t bother with that argument because it’s boring and frequently stupid, and also because I don’t think it’s a meaningful comparison. Science is not only no good, but completely irrelevant, for organizing a regular rota of visitors to check up on an old lady with Alzheimer’s who is estranged from her daughter. Religion is not only no good, but completely irrelevant, for understanding how prions in the old lady’s brain aggregated to cause her to lose her memory and functionality. (I have no intention of asserting that atheists never visit lonely senile people, just that they don’t use science to do so, because they are not idiots.)
But anyway, I joined in with this discussion because Paul is intelligent and interesting, and there was an issue of terminology I was curious about. The discussion led to Paul asserting (relevantly):
I think it is fair to say that the established results of the physical and biological sciences are less likely to be overturned than those of the social sciences. Evolution is a fact, current theories of anthropology will be outdated in a few decades.
Woah! That really, really brought me up short. I mean, it’s trivially not true, but even if it were it wouldn’t be a good thing! The whole point of why science is “better” than religion as a way of understanding how the world works is that scientific theories and models get changed when someone finds new data that contradicts the old view. This is a really good example of the way that selling science as an alternative to religion does a massive disservice to science (I care surprisingly little about vocal atheists misrepresenting religion): it leads to people, intelligent people I respect, trying to treat science as a source of eternal verities. I also absolutely disagree that physical science is inherently better than social science; it just isn’t, but trying to cram science into the niche where religion or Humanism or other philosophical systems belong can really easily lead to that sort of misguided hierarchy between branches of science.
The thing is, “believing in” science in this way doesn’t just offend me as a scientist; it kills people. Let me talk about a lecture I attended recently. The talk was given by the DUETs people, who are working to put conventional, evidence-based medicine on an even more scientific basis. But they are not doing this by claiming that good science should still be true decades and centuries after findings are reported. Quite the opposite! They are claiming that good science, and good evidence-based medicine, should be flexible in how it responds to new evidence, and established views should be constantly challenged. This isn’t just to make people feel better intellectually, it’s a really critical aspect of patient safety.
Example 1: for many decades in the second half of the 20th century, medical wisdom was that babies should be encouraged to sleep on their fronts. This advice was pretty universal, and even made it to Dr Spock’s famous book about childcare. It was based on the best evidence available at the time, but by the 70s there was an increasing body of evidence that sleeping prone is a significant risk factor for cot death. However, this evidence took a very long time (decades) to percolate into mainstream medical advice, because doctors and even the medical research community were reluctant to challenge the established scientific fact. They were especially reluctant to rely on data from soft sciences and observation of large human populations, in order to overturn data based on the “more reliable” physical experiments that led to the earlier bad advice. Dr Spock was wrong, not because he was a bad scientist (neither morally bad nor incompetent), but because cot death wasn’t really on the radar at the time he was writing. The data he relied on measured physical parameters of how well individual babies did, and was very likely correct that prone sleeping reproducibly improved those parameters in the short term. It was still wrong, and following Spock just because he had the authority accorded to a successful scientist still led to preventable deaths.
Example 2: some decades ago, there was some robust, reproducible, statistically valid scientific research showing that giving caffeine to premature babies helped to reduce the frequency of a condition called apnoea where the infant briefly ceases breathing. However, this research was often not applied clinically because there wasn’t any real evidence to show that reducing apnoea occurrence was particularly important. Nobody was being a bad scientist, nobody was following superstition or religious beliefs at the expense of evidence, there wasn’t even a big problem with doctors being unaware of the state of the art of research. It’s a perfectly medically valid decision that you don’t want to give a powerful drug with unknown long-term effects to premature babies who are extremely vulnerable anyway. It’s a perfectly valid ethical decision that you don’t want to do double blind randomized controlled trials on premature babies, with the very real possibility of harming them. Again, it took population studies and extrapolations from soft science observations to demonstrate that the frequency of apnoea is correlated with long-term risk of cerebral palsy and reduced life-expectancy. That’s a lot of avoidable disability and death because only one sort of clinical trial counts as properly scientific.
Example 3: some decades ago, there was some robust, statistically valid, properly designed and controlled research showing that steroids can be helpful in patients with severe brain injury. So doctors very sensibly started treating brain-injured patients with steroids. And scientists very sensibly did what scientists do, and repeated and extended the original experiments over the course of the intervening decades. They didn’t just assume that the original research must be “true” because it was “scientific”. They didn’t prefer to work on more glamorous, more prestigious new stuff at the expense of low-status confirmatory work. The effect size and statistical significance tended to decline with subsequent studies. This doesn’t mean that the original research was wrong, or that the original scientists were biased, incompetent or lying, it’s just an artefact of the way that scientific culture works. If you’re going to publish something novel, you have to have a pretty watertight case, with strong statistical significance and a relatively big effect, and that’s as it should be. But if you’re just confirming something that is already known, then rather less dramatic and conclusive results are acceptable because they support the established fact. And of course, we all know but can easily forget that 1 experiment in 100 will show that something is true at the 99% significance level purely by chance and sampling error!
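(For the statistically-minded, that last point is easy to see for yourself. Here's a tiny Python sketch, with everything invented purely for illustration: it simulates thousands of experiments comparing two groups drawn from the same distribution, so there is genuinely nothing to find, runs a crude two-sample t-test on each, and counts how many clear the 99% significance bar anyway.)

```python
import random

random.seed(42)

def null_experiment(n=30):
    """One experiment comparing two groups drawn from the SAME
    distribution, i.e. there is genuinely nothing to find."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    # Two-sample t statistic (equal group sizes)
    t = (mean_a - mean_b) / ((var_a / n + var_b / n) ** 0.5)
    return abs(t) > 2.66  # roughly the two-tailed p < 0.01 cutoff at 58 df

trials = 10_000
false_positives = sum(null_experiment() for _ in range(trials))
print(f"{false_positives / trials:.1%} of null experiments look significant")
```

Roughly 1% of these no-effect experiments come out "significant", which is exactly the sampling-error trap: run enough studies and some impressive-looking positives are guaranteed, even with nobody doing anything wrong.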
After many decades, a consensus started to emerge that the effect of steroids in brain injured patients was small and not terribly reproducible. Not false, just marginal. Meanwhile, treating people with high doses of powerful steroids has known side-effects. The medical community started to suspect that the definite, quite serious harm caused by steroids was greater than the small, poorly reproducible benefits. But there wasn’t enough evidence to stop treating brain injured patients with drugs that might save at least some people’s lives, until there was a huge, expensive publicly funded trial involving 10,000 brain injury patients across the EU which definitively proved that steroids do more long-term harm than good in this situation. So, ok, you might well say that this is a happy ending, this is medical research and evidence-based medicine working exactly as they should. But you have to take into account that even an optimal scenario means several decades of people receiving treatments which are actually harmful on balance, and which undoubtedly caused unnecessary deaths and suffering during this time period.
What are the implications for “rationalist” rhetoric? I think the most important is that scientific research, and particularly opinions couched in scientific-sounding language which include numbers, technical jargon and statistics, should be treated with at least a comparable level of skepticism to “woo” and alternative medicine. Lay people can’t expect to directly evaluate every individual piece of research they read about; indeed scientists can’t do that either, because most of it is outside their field and they have to spend at least some of their time studying new questions rather than confirming, validating and challenging old conclusions. But just accepting something as fact because it’s “scientific” is not the way to deal with this!
Just accepting the authority of someone because they have scientific qualifications leads to things like believing Wakefield about MMR because he did experiments and used statistics and medical terms. It leads to believing a popular book based on extremely dubious research because the authors have some academic credentials. And because neuroscience is a “real” science, those authors are granted more authority to talk about anthropology and sexual psychology than, you know, actual anthropologists and sexuality researchers, because human sciences don’t count. It leads to giving racist propaganda the benefit of the doubt, because it uses statistics and hard-sciencey jargon. Yes, it is a basic principle of science that one should accept unpalatable results if they are supported by data from well-designed and well-executed experiments. But all those people who piously recite this principle in response to badly-designed, biased and thoroughly debunked “experiments” “proving” that white people are inherently superior to other ethnic groups are strangely unwilling to give the same benefit of the doubt to the vast body of good research indicating that, you know, racism actually harms people. True, you can’t weigh and measure those harms, you can’t do double-blind experiments, but that doesn’t mean that social science is just a matter of what’s politically fashionable just now.
And that brings me on to my second point: if you believe that science is the best way of looking at the world, you should also accept that social science is the best way of studying human societies! That’s especially the case if you (or the journalists you rely on for your information) can’t tell the difference between actual physical / natural science and people using vaguely sciencey technobabble, but even good physics is relatively unhelpful for looking at social and cultural phenomena.
And yes, that goes for medicine too; there is lots of really vital medical information that just isn’t going to be found by doing randomized controlled trials and measuring the physical outcomes and applying statistics. Partly because a lot of randomized controlled trials that would be informative are also unethical. And partly because the information that can be measured physically isn’t always the most important; “how fast do babies put on weight?” can be measured easily, but a more important research question is “how likely are babies to die for no discernible reason?”
Drug trials are (relatively) easy to carry out in the time-honoured “hard” science way; you give the drug to half the patients and a placebo to the other half, and you measure objective parameters about how well the two groups do. I’m in no way arguing against doing this kind of experiment – hell, I spend most of my working life doing that myself – but it doesn’t mean that drugs are the best possible treatment for all possible conditions! For example most patients with joint pain would prefer physiotherapy and exercise rather than strong painkillers (and by the way, the reason I know this is because social scientists did serious research into the issue, not because some arrogant biologist assumed that his credentials totally qualified him to throw together an internet survey.) There is some evidence that the former has more benefits and fewer side-effects for a greater proportion of patients than the latter. But it’s rather harder to do a double-blind trial of physiotherapy, and you can’t use pure bioscience to answer questions like “how well do patients on this regime integrate into their communities and lead normal lives?” which may be as important as “what is the level of pain-related chemicals in the bloodstream of patients taking this drug versus a placebo?”
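(As an aside, the basic shape of that kind of trial is simple enough to sketch in a few lines of Python. Everything here, from the patient numbers to the pain scale to the assumed 1.5-point drug effect, is invented purely for illustration.)

```python
import random

rng = random.Random(1)

# Hypothetical cohort: each patient's baseline pain score on a 0-10 scale
patients = [rng.uniform(3, 9) for _ in range(200)]

# Randomize: shuffle the cohort, then split it into two equal arms
rng.shuffle(patients)
drug_arm, placebo_arm = patients[:100], patients[100:]

# Purely invented assumption: the drug lowers pain by 1.5 points on
# average; both arms also get some random measurement noise.
drug_outcomes = [p - 1.5 + rng.gauss(0, 1) for p in drug_arm]
placebo_outcomes = [p + rng.gauss(0, 1) for p in placebo_arm]

def mean(xs):
    return sum(xs) / len(xs)

# The estimated treatment effect is the difference in average outcomes
effect = mean(placebo_outcomes) - mean(drug_outcomes)
print(f"estimated effect: {effect:.2f} points of pain relief")
```

Because assignment is random, the two arms start out comparable, so the difference in average outcomes estimates the drug's effect. That's the whole trick, and also why it's so much harder to pull off for something like physiotherapy, which can't be hidden behind a placebo.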
And thirdly, I suppose, don’t put too much faith in the scientific process. In the best possible circumstances it is slow and inefficient and people get harmed while science is sorting out the answer to difficult questions. When we’re talking about medicine, individual variation within the population is inevitable, and however good the evidence is for a particular treatment, that best treatment will do nothing for or actively harm a proportion of patients. And to be honest, the best possible circumstances don’t always apply; it’s hopelessly naive to believe that all science is pure and unbiased and free of the influence of culture and political and financial considerations! Criticize superstition and woo and political bias, of course, but don’t couch your criticisms in terms of assuming that the scientific mainstream is always right. That’s bad rhetoric and it’s atrociously bad science.
May 2, 2011
It’s a bank holiday which looks set to break with tradition and provide some actual sun. It’s unpatriotic to talk about any serious topic on a day like this, but then again the US military inconsiderately chose a day when there’s not supposed to be any news to accomplish its decade-old goal of killing Osama Bin Laden.
My religious and personal views forbid me to rejoice in another person’s death. I suppose I am mildly pleased about Bin Laden because killing him slightly reduces the political inevitability of endless war. It may even slightly decrease the number of civilians who have to die because they had the misfortune to be born in Muslim-majority countries with more or less tenuous links to Al Qaida. It certainly won’t bring back the hundreds of thousands already killed, or even the three thousand Americans killed in the terrorist attack which formed the excuse for the last ten years of violence. Oops, it’s unpatriotic to have any qualms about the number of human lives considered acceptable “collateral damage” in the almighty quest to take revenge on Bin Laden.
Perhaps I should talk about the royal wedding instead. I suppose I’m mildly pleased that HRH has found a woman to marry who seems pleasant enough. A woman of his own choice, who has known him well for ten years, so perhaps we won’t see repeats of the mistakes his father’s generation made. Good for them. It’s unpatriotic to express doubts about the cost of such a huge, ostentatious wedding, though. And apparently it’s illegal to get together with a group of people and express unpatriotic opinions in public. Pre-emptive arrests of protesters who “might” breach the peace. Who am I to spoil the nation’s bank holiday fun by expressing negative thoughts about that?
So I’m not feeling the expected warm fuzzy thoughts about this weekend’s national knees-up and military success. In fact, my main reaction is to cement my intention to vote Yes to AV on Thursday. Not because I’m terribly proud of my cleverness in being able to follow sophisticated mathematical arguments about why AV is better than FPTP. Rather, because I want to cast a vote, however symbolic, for a party which is not willing to throw this country into the USA’s wars. I want to vote against our wealth being pawned, against the loss of lives of young people who don’t have any better alternatives than to join the army, and particularly against the destruction of several countries and the barely regretted deaths of more than half a million of their civilians, just so George W Bush can prove that he’s as much of a man as his daddy and Barack Obama can prove he’s as much of a man as the white guy. It’s unpatriotic of me to value foreign leaders’ machismo lower than the life of a child, but I want to vote my unpatriotism.
I want to cast a vote, however symbolic, for a party which will stand up against using this endless war as an excuse for appalling violations of civil liberties. Against arresting people for gathering together to express politically inconvenient opinions, against heavy-handed, violent and occasionally lethal policing methods, against imprisonment without trial and ministerial intervention in trials when they eventually do occur, against serious government intrusions on privacy, against cooperating with regimes which torture political prisoners and prisoners of war. It’s unpatriotic of me to want to safeguard political and personal freedom even when we are slightly more threatened by brown-skinned terrorists than white-skinned terrorists, but I want to vote my unpatriotism.
And I want to be able to vote without having to worry about helping to hand my city over to the racist BNP and their allies who are equally racist but too cowardly to admit their connections with a known racist party. The mainstream political parties are all too willing to throw my city to the wolves. It’s poor, it’s a historically safe Labour seat, it’s too far away from London and too unimportant to business issues for anyone to care. But it’s my door those wolves are slavering at, and AV will give me a slightly pointier stick to keep them from devouring me and mine. That’s what’s on my mind this sunny bank holiday Monday that happens to be, in the Jewish calendar, the day set aside for remembering the Holocaust.
Anyway. Enjoy the rest of the bank holiday. I am stuck inside with a pile of marking, but that’s a very minor rant compared to the one I ended up composing.
February 22, 2011
I’ve started reading Neal Stephenson’s Anathem. I’m about 200 pages in and so far nothing much has happened, though it’s a fairly pleasant sort of nothing. But there’s something about it which I’d characterize as self-indulgent, and it’s reminding me of the tenor of some long-running internet discussions about social justice related stuff.
The common thread I’ve noticed among some people who identify with geek subcultures is that they think of themselves as totally free of prejudice, and also that they experienced social exclusion as kids / teenagers (often to quite a severe extent), and therefore understand what it’s like to be part of an oppressed minority. Both these assumptions are partly true, but taking them as absolutely axiomatic in all circumstances leads to a lot of frustration.
Being free of prejudice seems to be partly to do with identifying as being very (or even completely) rational and objective. There’s no rational reason why women should be inferior to men, people with darker skin should be inferior to people with lighter skin and so on, so your typical geek rejects these irrational prejudices. There’s an ideal, and one that I have a lot of time for, of being meritocratic, and judging people only on their intelligence rather than superficial aspects of their appearance.
The problem is that things like culture aren’t seen as objective facts, and indeed discrimination itself is assumed not to exist because it isn’t rational. This means that geeks can be entirely accepting of people who differ in superficial characteristics in theory, but in practice, if the superficial characteristics have tangible practical consequences, this kind of geek gets into a panic, because that makes them not superficial any more, whereas the theory has already dismissed the differences as superficial. For example, if it turns out that women are rather less likely than men to find rape jokes funny, or that they object to being constantly subjected to images of hypersexualized “babes”, then there must be something wrong with the women. No rational person (who, like me, was unaffected by them) would object to these things, and women are just like me, therefore they must be totally irrational in objecting!
The other problem with this attitude is that intelligence itself is a mixture of two things. One is in fact a superficial characteristic just like skin colour or height or whatever; intelligent people aren’t inherently morally superior to people of low intelligence. The second is that there are behaviours that are often confused with intelligence, but are more reflections of social class than anything else. Things like being educated and knowledgeable, especially about areas that are considered prestigious (knowing a lot about sport or fashion isn’t prestigious, knowing a lot about history or physics is). Things like being skilled in logical argument / rhetoric (at least as much a matter of training as innate intelligence). This is particularly noticeable when the topic is of purely intellectual interest to some people in the debate, but of personal, emotional impact to others; it’s easy in this situation for geeks to assume that the second group are less rational or even less intelligent.
Of course, the holy grail of geekdom, being competent with computers and the internet, is only accessible to people who have enough money to afford computers and broadband subscriptions, and enough leisure time (or sufficiently indulgent bosses) to be able to spend many hours a week online. Now, it’s true that these things are fairly, though not universally, accessible these days, but people who have only been able to spend lots of time with computers and the internet for a few years are rated as less intelligent, and therefore less worthy, than geeks who have been part of that culture for decades, and that’s going back to a time when you had to have a lot of advantages in life to be online regularly.
The result of these assumptions about intelligence is that geeks often find themselves most comfortable surrounded by people from very similar backgrounds. It’s still admirable, but not all that difficult, to respect diversity when it’s largely variation between middle to upper-middle class, anglophone, educated, straight, white, not too severely disabled males. Of course there are geeks who don’t completely fit that picture, and it’s definitely a good thing that these people are welcome in geek circles, but the point is that most of them are people who can pretty easily act as if they did fit the standard geek profile. I very much count myself in that category; although I’m Jewish, the ways I’m Jewish mean that my lifestyle differs very little from that of a secular post-Christian, and I’ve experienced very little serious antisemitism, and my appearance doesn’t really mark me as non-white. Although I’m bi, my presentation is such that I’m assumed to be straight and conventionally gendered. Indeed, although I’m female, many of my interests, my upbringing and my personality are those typically considered masculine. In fact I have so much in common with straight WASP male geeks that I am planning to marry one of their number!
The trouble is that people aren’t always willing or even able to pretend that the things that make them different from the standard don’t exist. This causes a surprising amount of friction. I think it’s partly because any mention of difference can be read as accusing geeks of being prejudiced, which they’re just axiomatically not. Another issue is that people may well not want to spend time, either online or in person, with people who treat them badly. The decision to avoid someone who makes you feel physically / sexually unsafe, or who constantly hurts you with racist micro-aggressions, is confused with shunning or ostracizing, and ostracizing is evil. Geeks who understand far too well how painful it is to be excluded from a social group, but don’t have any direct personal experience of how painful it is to be subjected to misogyny, racism etc, may well end up creating an environment that is far more welcoming to bullies than their victims, even if they themselves are genuinely not sexist or racist or otherwise prejudiced. Part of it is putting too high a value on being “objective”; there’s no merit, and much harm, in trying to have a neutral, balanced debate about whether certain groups of people are really human.
The other side of it is the belief that being bullied as a kid means you understand systematic oppression. It’s almost always a mistake to compare one kind of prejudice and exclusion with another; the impulse to build on your own experiences to generate empathy is admirable, but it can easily be taken too far. Beyond that, though, there is a difference in kind, not just in degree, between being bullied because you like D&D better than football, and being subjected to racism. One of the things that’s bugging me about Anathem is that there is a group of people, the Ita, who are portrayed as being somewhere between Jews in pre-modern society, and highly excluded nerds. And some characters who are clearly supposed to be analogous to autistic / Asperger’s spectrum people in this world. Between that and the whole setting where a certain style of rationalism and logical argument is literally elevated to the status of a religion, I’m feeling a little impatient with the book.
When I started thinking about this sense of irritation, I was reminded of a whole bunch of things which are annoying in similar ways: the absolutely painful, awful conversations that happen when Making Light tries to discuss racism or religion (even though in fact it’s a pretty diverse community in terms of the declared identities of regular commenters). The stupid argument between the Overcoming Bias / Less Wrong crowd and some of the LJ social justice people about how racism and sexism are totally unimportant because they’re not cognitive biases or logical fallacies. Some of the discussions around Among Others (not the book itself, just some of the smugness of its readers who seem to be using it to justify their sense of superiority over the mundanes). A lot of tiresome reductionist arguments about how there’s no such thing as sexism because women on average have very slightly different brain structures from men, and obviously socially constructed gender is irrelevant because it’s not “objective” like physical measurements of the brain are. Some of the annoying bits of New Atheism.
I think what I’m saying is that sometimes admirable working principles can lead to negative practical consequences. I hope that if I write this down it will help me to appreciate all the positive things about geek culture, without falling into the trap of feeling superior to non-geeks or thinking I am knowledgeable about stuff I’m really ignorant of! Or perhaps I’ll just annoy everybody, I’m not sure.
December 15, 2010
I’ve been following the protests over the tuition fees issue, but not really participating. I’m not a protesting on the streets sort of person, and my institution seems to be relatively apolitical. Certainly the medical students can’t really think of jeopardizing their careers through unauthorized absences and potentially getting into trouble with the police. Regardless of the rights and wrongs of the students’ cause, police behaviour has unquestionably been deplorable. I’d have thought that the one thing a Liberal-Conservative coalition could agree on was that people have the right to express their opinions through demonstrations and protests. Apparently, though, we’re going to get all the disadvantages of a right-leaning government but none of the benefits.
I agree with the analysis that some people in my circle have been discussing, that the proposed situation for paying university tuition fees isn’t actually much worse financially than what we currently have. Indeed, it’s essentially a graduate tax being called by a different name for reasons of spin that I don’t totally understand. And raising the threshold at which graduates must start repaying loans is very likely a good thing.
That’s not to say I think the protests are groundless, though. For me the problem here is one of principle: this is the first step in a move from public funding of university education and research, to a consumerist model where students pay for their own education, and research is supposed to muddle through somehow. I had the same problem when I did march in 1997: true, £1000 a year is a small sum compared to the clear benefits of university education. But at that time we were promised that the fees would never rise beyond inflation, and it took no time at all for that £1000 a year to become £3000 and now, less than 15 years later, it’s looking very much like £9000. Once you’ve established the idea that universities bill students directly for their education, you’ve created a situation which I am pretty convinced will lead to a university degree becoming the entrance fee to an exclusive plutocratic club. If £9000 is accepted, well, £12,000 isn’t a big increase. In another decade we’ll be looking at people borrowing more than their lifetime earnings. For that reason of principle, my heart is very much with the students (and many of my friends) out on the streets confronting police violence, even though not everybody involved has a clear head about the numbers right now.
The other issue where I’m strongly on the side of the protestors is the withdrawal of EMA payments. There is absolutely no point making noble-sounding declarations about pushing universities to do outreach to students from poor backgrounds, if those students can’t afford to stay in school after 16 to do A Levels. I think a large part of the problem here is that the politicians, and the chattering classes as a whole, see £30 a week as pocket money. For EMA recipients, though, it’s the difference between possible and impossible. I’m horrified to see the government bribing married couples with £5 a week, even though most of them are adults who have one or more full incomes, have had a chance to become financially established and so on, while at the same time claiming that the financial situation is so dire that we can’t afford to support the poorest teenagers to the tune of £30 a week so that they can complete their secondary education and gain access to tertiary education.
I’m concerned for myself, because I have pretty much planned my life on the basis that there would be public funding for higher education, including research. I didn’t imagine that this funding would be generous or reliable, but I imagined it would exist! I don’t know how I would feel about working for a university that was a profit-making institution, selling certificates of middle-class status to act as entry tickets to the professions. Because if we start treating education as a marketable commodity, it won’t take long before we’re selling qualifications, not education. But hey, somehow, somewhere I’ll find someone to pay me to teach, whether it’s Jewish communities, primary schools or some kind of alternative adult ed track for those who can’t afford gilt-edged degrees.
So I’m much more concerned for the future of the country as a whole. At the moment I’m incredibly pessimistic, I foresee a social structure where only those with inherited wealth have a hope of a decent job, political influence, home ownership, financial stability etc. I’m not saying this is something that has suddenly happened, but I am saying that the government’s approach to withdrawing from funding HE is really consolidating this stratification. And I’m really distressed to see that anyone who has a problem with this future is in danger of being treated like a criminal at best, and actually suffering serious assault by police at worst. Apart from being unjust, this kind of society is incredibly unstable, and ultimately not at all beneficial even for those at the top of the heap.
Many people are disappointed with the Lib Dems for reneging on their promise to oppose tuition fees. Me, I basically expected that of the Lib Dems; I’ve seen what they were like in coalition in Scotland, where they had very little influence on their Labour coalition partners, and were far more interested in staying in power than in acting in their constituents’ interests. At election time, I hoped that the Lib Dems would be principled enough to refuse a coalition with Labour, which would have led to them reneging on their promises to oppose the Iraq war and support civil liberties. So, I got what I was hoping for in that sense, but I’m bitterly disappointed with the Conservatives, because they haven’t lived up to their promise of restoring individual freedoms. Plus I expected their educational policy to be about reducing the numbers of people going to university to the point where we could fund HE properly. I would rather see the brightest 10% of the country attending university than the richest 50%, alongside decent educational alternatives for people who want to learn practical and career-focused skills rather than pure academic subjects. I can’t criticize people who naively thought that the Lib Dems would uphold their principles in coalition, because I was equally naive in thinking that Cameron’s Conservatives would come up with a fair educational policy and rein in the worst injustices of the Labour term.
On a related matter, I’m a bit peeved at people uncritically repeating and re-tweeting that stupid article about Oxford’s admissions policy. Some guy cherry-picked statistics to create some eye-catching headlines suggesting that Oxford is reluctant to accept Black candidates, and made a big fuss about how much effort it was to find out the detailed breakdown of the data via Freedom of Information requests, when in fact most of the ethnicity data is publicly available on university websites, and he just wanted something more fine-grained. Besides which, separating out different ethnic groups who all happen to have black skin is a valid exercise; clearly actual Africans, African-Americans, people living in Britain whose families recently came from Africa, and those whose families came from the Caribbean a generation ago are different groups of people with different experiences. Conflating specific data about Black British people of Afro-Caribbean origin with data about Black applicants in general is bordering on deceitful.
I’m not at all claiming that Oxford totally doesn’t have a problem with racism! There may well be racism. But making a big fuss about statistical noise fluctuations in tiny numbers of applicants isn’t at all the way to address this. Part of the problem, of course, is the numbers of students from particular ethnic groups who get the kind of school education that makes applying to Oxbridge feasible. There is very likely racism involved in that situation, but it’s not the fault of any university or college. But even if you’re trying to deal with actual racism on the part of Oxbridge colleges, this approach is IMO counterproductive. Repeating alarmist articles all over the place simply discourages ethnically disadvantaged students from applying in the first place. It’s like stereotype threat, only more extreme, and I think it’s highly irresponsible to spread that kind of misinformation.
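To make the "statistical noise in tiny numbers" point concrete, here's a minimal simulation sketch. The numbers are entirely invented for illustration (they are not Oxford's actual figures): it just shows how much an observed acceptance rate bounces around from year to year when the applicant pool is small, even with no bias at all.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical figures, purely illustrative: suppose 25 applicants from
# a small group each genuinely have the same 20% chance of an offer.
TRUE_RATE = 0.20
APPLICANTS = 25

# Simulate ten admissions cycles and record the observed rate each year.
rates = []
for year in range(10):
    offers = sum(random.random() < TRUE_RATE for _ in range(APPLICANTS))
    rates.append(offers / APPLICANTS)

# With only 25 applicants per year, the observed rate can swing widely
# between cycles purely by chance, despite the identical underlying odds.
print([f"{r:.0%}" for r in rates])
```

A single bad-looking year in a pool this small tells you almost nothing; you'd need either much larger numbers or many years of data before the rate means anything.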
I’m reminded of a case when I was at college: there was a whole big fuss about some kid who was rejected from Magdalen College even though she had four As at A Level, and her headmaster went to the press claiming that she had been discriminated against because she attended a state school. He ignored the fact that all the candidates for medicine at Magdalen had straight As at A Level, not to mention that the girl hadn’t made up her mind whether she wanted to read medicine or biochemistry. All this achieved was a marked dip in applications from state school pupils the following year; so much for all those righteous crusaders up in arms about Oxford’s biased admissions policy! Innuendo sticks; people remember the shock horror story of bias, not the careful debunkings that follow. Simply repeating this kind of stuff for the pleasure of outrage does far more harm than good.
I’ve probably offended everyone by now. Oh well, that’s my political rant for the week.
November 20, 2010
I just bought my 1000th song in mp3 format (Vienna Teng’s Whatever You Want). This seems a good excuse to talk about buying digital music.
I didn’t really start buying music until 2007, because that was when it started to become possible to buy legitimate single individual tracks without either DRM or massive hassle, for a reasonable price. At that point it was still a bit of a pain; I joined the emusic site, which has done me very well but has some drawbacks. You have to sign up to the site, you can’t just buy one-off tracks. And you have to pay a monthly subscription, which doesn’t roll over, so although the official price per track is pretty low, you end up paying more than that unless you’re incredibly organized about making sure you finish each month’s allowance (which I’m not!). It was still worth it to me so that I could get DRM-free versions of commercially released songs. The alternatives at the time were iTunes, which sold you music you could only use on the same computer where you made the payment (and what on earth is the point of that?!), Napster, where you paid a subscription to listen to music but didn’t actually own it, and various grey-market or outright illegal Russian sites. There were other DRM-free sites, but they were also effectively self-publishing outfits, selling music by quasi-amateur musicians who hadn’t signed contracts yet and needed exposure even more than they needed money. Nothing wrong with that, but I wanted to buy music that I’d actually heard of as well.
For the ten years before that, I didn’t buy digital music at all, because only these inferior alternatives were available. In fact, when I first got online in 1997 you couldn’t legitimately buy music online even with DRM. If you wanted to own music at all, you had to buy a CD, usually paying £12 to £15 for the whole album even if you only wanted one track. And you had to take on the responsibility of storing and transporting the CD, which was not a small thing when I was moving across the country 6 times a year. Even since graduation I’ve moved from England to Scotland, from Scotland back to England, nearly moved to Australia but had to cancel at the last minute, moved from England to Sweden at short notice, from Sweden back to England, and across the country. That’s not unusual for my peers; indeed I’d say that I’ve had more of a settled life than many of my friends, staying in the same place for three years at a stretch. The fact that I’ve been able to keep any of my possessions at all is mainly because I have parents who have a big house and a lot of generosity, so they were able to look after things temporarily while I was moving. When you take into account the fact that CDs have a limited shelf life, digital music made a lot more sense, but it just wasn’t (legally) available.
There was a huge, thriving black market for mp3s back in the late 90s, and yet nobody seemed to see this as a commercial opportunity. I would gladly have paid for legal mp3s if they had existed, and I was in no way alone among my friends, even students who had fairly limited incomes. Instead I ended up scouring the internet for contraband (AltaVista is, to this day, the best search engine for finding individual tracks; Google has never overtaken it because it assumes you’re looking for information, and routinely discards pages that are just catalogues of available files). I joined in schemes that were the precursors to today’s torrenting and peer-to-peer networks: FTP based systems where you uploaded a desired track and in return got a password that allowed you to download what was already there. Some of my music I acquired because college was one giant network. We were the very narrow generation who had completely unmonitored T1 ethernet, which was used for a lot of LAN gaming and other distinctly non-academic purposes. The culture was that you made your music folder available to the whole college network, and people helped themselves to any music they liked the sound of.
Thing is, you might imagine that music labels (and consequently artists) lost a ton of money due to this sort of behaviour. In fact the situation is quite the contrary; the very minute that it was possible to buy this music legally, I couldn’t wait to give them my money for music that had a special place in my heart since it was the soundtrack to such a formative part of my life. But it took ten years for said labels to grudgingly allow me to give them money.
The situation isn’t a lot better nowadays. Apple, bless ’em, finally jumped onto the mp3 bandwagon; once the iPod became a fashion item, everybody started making portable mp3 players, and phones which double as mp3 players, and there’s clearly a market to buy actual music to put on them. But a lot of the major labels are still insisting on DRM’d music only, which means you often can’t buy it at all outside the US (even without considering the multiple other disadvantages of DRM). And you have to download some software which only works on one or two OSs, so if you have anything non-mainstream or just old you’re out of luck. So here I am, still pirating music, because the copyright holders simply refuse to sell me what I want, at any price at all, let alone a reasonable one. I pirate a lot less now; I’m reasonably happy to buy just whatever is available in mp3 format, and do without stuff that isn’t. But there are some songs I really like that are still, even today, simply not available legally at all.
How do I choose what to buy? Well, I get recommendations from friends, often sub-legally since you’re not technically allowed to give your friend a copy of music so they can get into it too. Of course this is exactly like the whole 80s thing where “home taping [was] killing music”; in fact, most of us discovered new groups which we spent money on because our friends made mixtapes for us. (doseybat has been a huge, lifelong influence on my musical tastes and probably caused hundreds of pounds to flow from my bank account towards the artists she recommended to me when we were teenagers.) And I use various internet services that point to music that has something in common with what you already like. Back in the 90s it was Yahoo radio where you could make custom stations, and they weren’t bad for music discovery. Nowadays it’s Last.fm and the wonderful Pandora music genome. Of course, the music industry goes to huge lengths to restrict these services, making them available only in the US (unless you use hacks and cheats to get round their restrictions). Because in the US they can put pressure on the courts and the legislature to make providers of such services pay exorbitant fees to run them at all, eventually driving them offline because just to cover their costs they have to charge more than consumers are prepared to pay.
I’ve come to the conclusion that the music industry don’t actually want people like me to spend more money on music, which you’d think would be in their interests. No, they want the musical tastes of the public to be completely predictable and ideally entirely dictated by the companies that own the music. They want to make sure that nobody ever discovers a new artist except by listening to mainstream commercial radio, and nobody ever buys any music except from distributors which only really stock the same songs that show up on those highly controlled radio channels. Never mind the long tail, never mind the convenience of an entirely digital distribution chain, they want to know exactly which artists they should sign because consumers can spend money only on the artists that they have chosen to back. They’re using mechanisms like DRM, like really disproportionate reaction to copyright infringement and “file sharing”, like closing down music discovery services and removing fanvids from YouTube, not to protect copyright as they claim, but to make the music market predictable.
This is pretty grim for actual musicians, and not that promising for consumers. I’m just hoping that capitalism will prevail, and the desire of millions of people like me to find a way to spend money on music we like, in convenient format, will create enough of a demand that the market will act to fill it. There are traces of it; the fact that it’s even possible to buy plain mp3s from mainstream distributors like Amazon and iTunes is a promising sign. But it is very weird to find myself in the position of being almost forced to steal things because nobody will sell them to me!
August 19, 2010
I’ve been thinking vaguely about issues around porn since reading the discussion chez Yuki Onna. I’ve come across a couple of other essays on the subject too, mostly because of that bias where something that’s in your mind already seems to be all over the place. Mostly considering what an amazingly polarizing issue it is among feminists and other generally liberal types I associate with. So I thought I should probably have some kind of opinion on it…
June 28, 2010
When I went to university, I left an almost exclusively female environment for a male-dominated environment. The differences I noticed were very small, and all positive. But this weekend I returned to my old college for a reunion, and there were several things that started me thinking.
From the age of 8 until A Levels, I attended an all-girls school where nearly all the teachers were female. It took me a couple of years to reach the point where I was considered acceptably feminine by my peers, and even after that I was sometimes a bit of an outsider. That said, the general attitude at school was that academic achievement mattered, and girls who were perceived as caring more about clothes, fashion, makeup, appearance etc were looked down on much more than I was for never quite being girly enough. And the school environment really reinforced the attitude I had from my family, that I could do anything I wanted, that academic success and assertiveness were rewarded, and that science was king.
Though on the down-side, school was often intensely homophobic (I have the impression this is not typical for girls’ private schools of the era), and there was some associated gender-policing. Computer literacy and domestic skills were actively discouraged, on the grounds that we should aspire to something “better” than mere pink collar secretarial or clerical work, and not even think of devoting effort to becoming good home-makers. I remember our headmistress refusing to pass on an advert offering pocket money jobs as note-takers for Cambridge students with disabilities, because that was too much like being a secretary. (I found out about said job via other channels and took it anyway, and I’m very glad I did!) I think on the whole it was better to be encouraged to learn lots of maths and science and discouraged from learning to cook and sew than the other way round, but there were definitely some prejudices there. (And the IT thing was at least partly just failure to predict the future.)
I did quite a lot of research into where I should apply for university, though not as extensively as a motivated school-leaver would these days, as I didn’t really have access to the internet. School kind of pushed the most academically able towards Oxbridge; in my case this was good advice anyway. When I chose a college, I was aware that my first choice had been the last of the former men’s colleges to go mixed, and had the smallest proportion of female students in Oxford. This didn’t bother me at all. My college might have had a ratio of 2 men to 1 woman at undergraduate level, and a tiny fraction of female faculty, but it had recently appointed the first woman in history to head a mixed sex college. My college tutor was to be a female professor, a rarity within the whole university and the whole of science academia, not just in my particularly male-dominated college. And I was applying to read Biochemistry, the one subject with a 50/50 gender balance, not just in student numbers but in distribution of grades.
Unlike a lot of my class-mates, I had plenty of male peers as a teenager, partly through the Jewish community, and partly because I have two brothers close in age and their friends were often part of my circle. (Not, I should add, in a “hot sister” way since I had the good fortune of not being at all hot, which meant that the most obnoxious teenaged boys didn’t deign to notice my existence, and the slightly less obnoxious never tried to see how far they could push minor sexual assault in order to impress me or their mates.) Also, I had the confidence instilled by both home and school that I could succeed, that there was no reason to believe that boys were any more capable than me because of gender. The not being hot thing also meant that I completely screened out any advice I might have picked up from the surrounding media about needing to pretend to be stupid or demure in order to “get a boyfriend”; I just assumed that such a thing as a boyfriend was completely unattainable.
Anyway, I turned up at university and took to it like a duck to water. I never felt outnumbered, though of course I was! (To be fair, I think science students tend to socialize more in subject groups than in colleges, and in the biochem department I was not outnumbered at all.) I enjoyed the academically competitive atmosphere of college, and never had a problem speaking up in tutorials. In this respect I had advantages over many female students, who found that a heavily gender-skewed environment did not at all suit their learning styles, who found it hard to get attention from tutors and all the other stuff that makes equally able women do less well than their male peers. I also found that the burden of trying to live up to a gender I didn’t understand was completely lifted; I was allowed to just be me, and nobody cared whether I was feminine “enough”. In addition I got into the LGBT scene, being, in fact, bi, and that gave me lots of tools for looking critically at gender. There was essentially no direct sexism, and I was too thick-skinned to even notice the institutional kind. I made male and female friends, both in college and out.
Anyway, the thing is that this reunion was partly in honour of this first ever female head of college, who is retiring this year. Since I’m a great fan of hers, I decided to attend. She has her portrait up in the dining hall, which captures her sardonic smile and doesn’t particularly draw attention to the fact that she’s the only woman among old white men spanning three quarters of a millennium.
As part of the event, there was a talk from some guy high up in the FSA about the financial crisis and whether it could have been foreseen or prevented. During the questions afterwards, I noticed that question after question was coming from the men in the audience. And then I realized that, well, it’s mostly older people who bother coming to college reunions, and that meant that nearly all the women in the audience were spouses rather than alumni. And then I thought I’d better ask a question just to balance things a bit. It wasn’t a very brilliant question, but lots of the questions from men were similarly just demonstrating general intelligence and a bit of bluster, not particularly detailed knowledge of banking and finance.
A few more of my generation showed up for the farewell lunch. Not a huge sample, certainly, but every single woman I spoke to of around my age is currently taking a career break to raise a family, and not one man is. There are quite a lot of intra-college marriages, actually, so this isn’t about alumni versus spouses. It doesn’t prove anything at all, it just struck me how completely one-sided the situation is.