Interview with Dr. Bill Axinn, Survey Expert & SoundRocket Collaborator (Part 2)

by Derek Mehraban | Dec 23, 2021 | Climate Surveys, Collaborators, Higher Education, Survey Methodology

SoundRocket conducted an interview with University of Michigan professor Bill Axinn. Bill is a valued client and collaborator, and this post outlines Dr. Axinn’s new paper and explores the importance of responsive survey design in campus surveys. This is Part 2 of a two-part interview; if you’re just catching up, make sure to check out Part 1!

Bill, I know there’s a paper coming out, titled “Applying Responsive Survey Design to Small-Scale Surveys: Campus Surveys of Sexual Misconduct.”

Bill Axinn:  Yes, we’re very excited. It’s coming out in a journal called Sociological Methods & Research, which is one of the top journals in the field of sociology. I’m excited as a sociologist, because the methods that we use to study sexual misconduct are methods that could be useful for a lot of other sensitive topics that sociologists study. And sociologists often study at the smaller scale – not a huge national survey, like the US National Survey of Family Growth – but small campus surveys, small organization-specific surveys, or small community surveys. In my field, we do that kind of work a lot, and so we’re very excited to make those tools more accessible to folks doing other kinds of surveys.

What are some of the interesting findings that you would call out from that paper?

Bill Axinn:  Sure. And as you might guess – a paper that I wrote and worked on for years – I’m willing to talk about it in much more detail than most people want to hear. I’ll try to stick to the highlights.

Sure, just the highlights.

Bill Axinn:  Number one is that we were able to test, with randomization, two different levels of incentive. There’s no doubt that when you pay an individual for their time – reimbursing them for the expense of spending their time answering this questionnaire as opposed to doing something else – when you pay them to spend their time that way, participation rates increase. That’s very normal, and well-documented. 

What’s less clearly understood is how increasing the amount you pay them affects participation. And we were able to run that comparison by randomly assigning people. It’s a 15-minute-or-less survey. And we were able to randomly assign them to either $15 for 15 minutes, or $30 for 15 minutes. And by randomly assigning, we were able to learn that it makes almost no difference. So it’s easy to conclude: if it makes almost no difference, there’s no need to spend $30.
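For readers who want to see the mechanics behind this comparison, here is a minimal sketch in Python of a 50/50 randomized incentive experiment and a simple test of the difference in response rates. Everything here – the function names, arm sizes, and completion counts – is a hypothetical illustration, not the study’s actual data or code:

```python
import math
import random

def assign_incentive_arms(sample_ids, seed=42):
    """Randomly split a sample 50/50 between a $15 arm and a $30 arm."""
    rng = random.Random(seed)
    shuffled = list(sample_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"$15": shuffled[:half], "$30": shuffled[half:]}

def two_proportion_z_test(resp_a, n_a, resp_b, n_b):
    """Two-sided z-test for a difference between two response rates."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    pooled = (resp_a + resp_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

arms = assign_incentive_arms(range(3000))
# Hypothetical completion counts consistent with "almost no difference":
z, p = two_proportion_z_test(resp_a=310, n_a=1500, resp_b=322, n_b=1500)
print(f"z = {z:.2f}, p = {p:.3f}")  # a large p-value: the extra $15 buys little
```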

And then the other thing that we did was based on the question of how human beings understand different types of communication, as I was just discussing. We took a subsample of the people who didn’t respond. Typically, you send out these invitations – sometimes through the mail, sometimes by email, sometimes both – and people either respond or they don’t; you might send a reminder, you might send two reminders, and again they either respond or they don’t. We did that for about three weeks, and we stopped. Then we took a half-sample, 50% of the remaining people who had not responded, and we called them on the phone. And we didn’t call them on the phone to collect the data. This is very important. Because we want them to do the data collection via the web – again, high security protects their answers, no one has to hear what they have to say. It can go straight into encryption on a web survey server. But we called them to ask them to do the survey – and to answer their questions about the survey. You know, some of those people said, ‘Wow, I didn’t know this was important,’ or ‘Wow, I just deleted that without looking.’ There are all kinds of reasons why an email or a letter doesn’t get the job done. And what we found is that phone calls asking people to do the campus survey significantly increased the response rate. And even more important than that, it changed the kind of people who responded. And so that is a really big deal in this world.
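The follow-up design Bill describes is simple to express in code. Here is a minimal sketch, assuming hypothetical ID lists rather than a real sample frame:

```python
import random

def draw_phone_followup(invited_ids, responded_ids, fraction=0.5, seed=7):
    """After ~3 weeks of mail/email invitations and reminders, randomly
    select a fraction of the remaining nonrespondents for a phone call.
    The call recruits and answers questions; the survey itself is still
    completed on the encrypted web server."""
    responded = set(responded_ids)
    nonrespondents = [i for i in invited_ids if i not in responded]
    rng = random.Random(seed)
    return rng.sample(nonrespondents, round(len(nonrespondents) * fraction))

# Hypothetical usage: a 3,000-person sample where 600 answered the web phase.
to_call = draw_phone_followup(invited_ids=range(3000), responded_ids=range(600))
print(len(to_call))  # 1200 -> half of the 2,400 remaining nonrespondents
```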

Okay, you did a survey, okay, you’ve got some data. Is that data really representative of the population you say it represents?

Sampling is the science of selecting the cases so that they’re representative of the target population. We know how to do that. But if the selected sample doesn’t participate, then it’s not going to be representative of the population. And my understanding is that in web surveys of institutional populations – campus students, staff, and so on – response rates of 10% are common. Response rates of 15% are common. In fact, some people talk about response rates of 18 or 20% as a huge achievement. By adding the telephone calling on top of the individual incentive – the $15 per person for completing – we got it to a 67% response rate. And the difference between just the $15 and the addition of the telephone call brought in additional people of a different type.
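One detail worth spelling out for survey-minded readers: because the phone phase targeted only a random half of the nonrespondents, a standard way to combine the two phases is to weight phone-phase respondents by the inverse of their selection probability (1/0.5 = 2). The sketch below illustrates that general principle; it is our illustration, not the paper’s actual weighting scheme:

```python
def phase_weight(phase):
    """Design weight = 1 / probability of selection into the phase.
    Web phase: everyone was invited (probability 1.0). Phone phase:
    a 50% subsample of nonrespondents (probability 0.5)."""
    return {"web": 1.0, "phone": 2.0}[phase]

# Hypothetical respondent records; y is any survey outcome of interest.
records = [
    {"id": "a1", "phase": "web", "y": 0},
    {"id": "b2", "phase": "phone", "y": 1},
    {"id": "c3", "phase": "web", "y": 1},
]
total_weight = sum(phase_weight(r["phase"]) for r in records)
weighted_mean = sum(phase_weight(r["phase"]) * r["y"] for r in records) / total_weight
print(f"Weighted mean of y: {weighted_mean:.2f}")  # 0.75 in this toy example
```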

The paper documents some of this, and I think the phone calling – because it’s a new form of communication, and it allows you to answer questions like, ‘What is this, why am I being asked to do this, is it safe?’ – answering those questions is always going to be effective at raising response rates. I think the kind of people it helps bring in depends on the topic of the survey. And so, you know – pick a random example – this was a survey of sexual misconduct. And that change brought in more men. Well, there’s some theory in survey methodology that those people for whom the topic of the survey is the most salient are the most likely to respond. A tough thing to talk about, but sexual assault happens to both men and women. It does. But it happens a lot, a lot more to women than it does to men. And so it could be that a lot of men just don’t feel that they have much to say about that topic, and so they didn’t respond to the survey. There are other examples of different groups that came in at higher rates with the phone calling. But I think that’s a big piece of the headline there: extra money? Not really a big difference. Calling people up and answering their questions substantially increases response rates and changes the kind of people who ultimately respond. It’s a more representative sample.

I’m sure that must have been exciting for you. Quick question on the cost comparison: obviously the $15 is a cost, and making a phone call is a cost. Can you justify those costs?

Bill Axinn:  Yes, actually it’s about the same cost to call respondents as the additional $15 incentive we tried. And, you know, I say you can justify these costs because of the way we did it. One of my teachers and former colleagues, a guy named Robert Groves, who’s in the National Academy of Sciences for his work on survey methodology, wrote a book with a great title, “Survey Errors and Survey Costs,” where the two appear as mirrors of each other. And it is true that errors and costs go together. But it turns out, when you have a fixed budget of any level, you have choices about how you spend that fixed budget. And in this case, instead of giving the extra $15, we could take that same amount of money, put it into phone calls, and do a good enough job to get responses from this half-sample of the nonrespondents. Now, of course, you could have picked all of the nonrespondents, or you could have picked 10% of the nonrespondents – how much that costs really depends on the size of your survey. The University of Michigan has a very big student population. At the time, it was about 47,000 students; I think now it is closer to 50,000. But you know, that’s a very big student body to try to represent. A sample size of 3,000 or more would be normal, and so what it costs to do the phone calling is very dependent on how many people you choose to do it for.
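To make the fixed-budget trade-off concrete, here is a back-of-envelope sketch. The interview gives only the conclusion that the two options cost about the same; the specific rates and per-case numbers below are hypothetical:

```python
n_sample = 3000
completes = round(0.67 * n_sample)      # ~2,010 completes at the achieved rate

# Option A: raise the per-complete incentive from $15 to $30.
cost_extra_incentive = completes * 15   # $30,150 in extra incentive money

# Option B: spend that same budget calling half of the web-phase nonrespondents.
web_phase_rate = 0.20                   # hypothetical web-only response rate
cases_called = n_sample * (1 - web_phase_rate) * 0.5  # 1,200 cases to call

break_even = cost_extra_incentive / cases_called
print(f"Phone follow-up breaks even at ${break_even:.2f} per called case")
```

In this toy version, if a short recruiting call costs less than about $25 per case in staff time, the phone option fits inside the same budget – consistent with Bill’s point that the two choices cost roughly the same.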

Speaking of costs and quality of the survey: having an outside party like SoundRocket is probably more expensive than having grad students do the work, for example. Do you see a justification for that? I know you use them because they’re an objective third party, but do you feel there’s an advantage to sometimes having an outside party, cost-wise?

Bill Axinn:  Yeah. You know, I work in a university, so there are lots of people doing campus surveys or other kinds of data collection with their PhD students or other kinds of student labor. I think, you know, in some sense, there’s some subsidy for those costs going on, right? And maybe that makes it feel easier. I think the difficulty in this is that who does the contacting is going to interact with the subject matter. And I would argue that pretty easily about experiences of sexual assault. Say a graduate student calls you and says, ‘Hey, I’m here to recruit you for a survey.’ The thought process is likely to be, ‘Why are fellow students asking me about whether I, as a student, have been sexually assaulted?’ You know, I don’t think that’s going to go too well. I think it’s going to bias who participates.

And I think, as another example, with a topic like Diversity, Equity and Inclusion, similarly, having people from your own institution – your fellow students, fellow staff – contact you is probably going to bias who participates in the study, and then bias it with respect to what you’re trying to measure. In this case, it could be experiences of being discriminated against. And you have to ask: are people going to be forthright about that when they know they’re speaking to fellow students? Are there other issues?

I also think that the quality of the interviewing itself is very much a function of both the training and supervision of whoever’s doing it – whether it’s students or professional staff. And I think, you know, topic-by-topic, we just have to take the time to think through: what are the potential consequences of having it done this way versus that way?

I don’t want to take too much more of your time, but who would benefit from reading this paper?

Bill Axinn:  Thanks for asking that. You know, as I said, as a sociologist I’m excited, because other sociologists who are trying to do a study would benefit from reading the paper. That was the intention of publishing it in a sociology journal. Of course, we thought about publishing in a journal of higher education as well; we’ve published some of the results from that survey elsewhere – for example, in the Journal of American College Health – because I think professionals in higher education, or probably any professional organization that wants to do a survey of itself to better understand climate, particularly with regard to a sensitive topic, would benefit.

When you consider sexual misconduct – and by the way, I mean harassment as well as assault – and topics of discrimination, there is certainly something to be learned. But I also suspect there are a lot of topics like this – say, mental health or substance use – other kinds of topics that are important in institutional settings. I think anybody who’s thinking about conducting a survey like that should have a little look at this thing and see the pluses and minuses. There’s a theory section there that’s designed for sociologists. I can imagine some professionals wanting to skip over that and get to the tables and the conclusion – I get it. But I think this paper is for folks who want to understand what separates state-of-the-art methodology for doing such a survey from ‘Hey, what’s the cheapest way we can check the box that we did it?’

Awesome, and congratulations. I’m always impressed, and the research you do is very important. Is there anything you want to say in conclusion?

Bill Axinn:  You know, I think I’ll make two comments in conclusion there. One is that this experience of doing a survey on the topic really opened my eyes. I had to confess to the leadership of the University of Michigan that it’s not just that I’m male – I was, like, a go-to-the-library guy. No, I didn’t know what was going on around me. And in retrospect, I’m ashamed of it. I think all of us don’t know how bad this is, and should be ashamed of it. But it led me to do some more research. And another paper that uses the National Survey of Family Growth documented that young people of the same age who don’t get to go to four years of college actually experience much higher rates of sexual assault than young people who do get to go to college.

One point I like to make whenever I’m in a conversation on this horrible topic – one that’s very hard to look at straight – is: folks, we have a very serious problem here. And everything you’ve heard about college campuses, however true that might be, is a small part of the total story. And so I think it opened my eyes to what a very large-scale public issue this is.

I hope that with good scientific work on the topic, we’ll continue to keep the conversation going. I think one of the best things doing the survey did on our campus – it didn’t magically make a problem of this magnitude go away – is that it got people real, concrete, specific data about how big and how widespread the problem is. So they could get busy talking about, ‘Now what are we going to do?’

This is Part 2 of our interview with Dr. Bill Axinn; to read more, see Part 1.


At SoundRocket, we cut our teeth on campus surveys for higher education. Campuses’ ready access to email and web technologies made our services an excellent fit for academic researchers who wished to engage in innovative methodologies. We have built on those successful projects with a growing list of large-scale standardized research studies led by scientific research teams. If you’re in search of a partner for your climate survey in higher education settings, consult with the experts at SoundRocket.

About the Author

Derek Mehraban