Scott D. Crawford: ‘Equity & Inclusion in Accessible Survey Design’

Mar 3, 2022 | Climate Surveys, Diversity, Equity, & Inclusion, Events, Survey Research


Below you will find the transcript of the presentation Equity & Inclusion in Accessible Survey Design, as well as an option to download the original slides.

Transcript: Equity & Inclusion in Accessible Survey Design

Fred Conrad:

Today’s speaker is Scott Crawford. Scott is a research consultant and founder of SoundRocket, a social science research firm located here in Ann Arbor. Scott has a master’s degree in applied social research from the University of Michigan, which in many ways is the precursor of the current program in Survey and Data Science. Scott has focused his career on the use of innovative technologies in social science research, including web, multi-mode, and mobile surveys, and the Internet of Things. He has assisted in the implementation of survey research projects for hundreds of researchers and research institutions, and has led innovative collaborative research projects, including the National Campus Climate Survey.

Recently, Scott has overseen several higher education-based campus climate surveys of diversity, equity, and inclusion, including at the University of Michigan, where a focus of the study design has been maximizing the ability of individuals who use screen reader technologies to participate equitably, the topic about which Scott will be speaking today. It’s a great pleasure to welcome Scott Crawford today. His talk is entitled Equity & Inclusion in Accessible Survey Design.

Scott D. Crawford:

Thank you, Fred. It’s amazing to be here and talking with you all today. So, I’ll dive right in. My name is Scott Crawford. I do work at SoundRocket where I’m the founder. I also kind of consider myself a research consultant. I am a white, cisgender man, very early fifties. I’ve emphasized very early fifties. Still getting used to that idea with graying hair, full beard. My pronouns are he and him. 

I’m going to start with some common issues that we’ve found in survey research with regard to accessibility, and most of the conversation today is going to be focused on technology. I’m going to talk a little bit about how we implemented some technical solutions to solve some of the problems that we’d had, or seen, or identified before. But it would be remiss to go straight to the technology without speaking about some of these other things first.

So the language that you use, whether that means translations or even just the basic readability of the language you’re using, is part of accessibility. You need people to be able to understand what you’re asking them in the survey. Literacy levels of your respondents are important to consider as you’re designing your survey. It’s a common theme that I’ve seen in academic circles where we spend a lot of time talking about concepts and theoretical propositions and things that really don’t make sense to the average person. And then some of that language seeps into the survey, and that becomes really inaccessible.

Consider how visual content is being used. So if you do have a survey where there’s visual content in some form, consider how that is being used. That starts to cross over into some of the technological issues, but it doesn’t have to. Having a survey where someone has to look at an image, and seeing the image itself is required to answer the survey adequately, is not very accessible to those who have sight issues. This also applies to some of the issues that I’ll be bringing up on the technological side. There are slider bars, a concept that is very difficult to do without a visual component. Semantic differential scales, which we’re going to talk about in detail. Use of images, and even colors alone. There’s a lot of research that shows how colors and other elements in self-administered surveys, like web surveys, might have an impact on survey response when people see them. So consider what might be happening if people aren’t seeing those colors as well.

And then multiple modes of data collection: this is one of those areas where you don’t want to rely upon mode choice as an easy solution to provide accessibility. Multiple modes of data collection are a good thing, but make sure that you’re doing it in a way that really does have equity and accessibility in mind. It’s not equitable to subject someone to a telephone interview with an interviewer on a sensitive topic if everybody else in the survey is taking it on a self-administered web platform. So things like that need to be considered as you’re going through the design.

So why does it matter? All of you are probably here because you know it matters. I’m going to be preaching to the choir a bit, but I do think it’s important to put some of this in context. My first job in survey research started in 1996 at the Survey Research Center. I came in as a research assistant. And having been in the field since then, I’ve seen some things happen. Survey researchers develop certain strategies to cope with the limitations, sometimes perceived limitations, of the technology or of the design of the surveys that we do. With interviewer-administered surveys, poor interfaces in personal and telephone interview instruments are often just handled by adding interviewer instructions.


So if we can’t quite get the system to work as we want to, we add these instructions in to train the interviewer on how to deal with it. Anybody who has been in the field has probably seen that happen. ‘Let’s train them around this problem.’ It’s a common thread. It works, but it’s inconsistent, it relies upon the interviewer to engage, and maybe it’s not the right way to do it if you really want to be thorough. Then we see difficult user interfaces in self-administered surveys, like web surveys.

So if we can’t quite get the data the way that we want in a web survey, maybe because we’re going to require some visual element, then another way may be for us to bring an interviewer in and have a telephone call, which could work. But we’re lacking a truly compliant web survey solution, and we need to provide some sort of an alternative. And rather than trying to figure out how to make the web survey compliant, we sometimes get a little lazy and just go back to the easy way that we know.

So, there are these tendencies that I’ve seen, and I’m not pointing any fingers at organizations. I think every place I’ve worked, including my current organization, has done this from time to time. And it’s time for us to step back and really think about: is that the right way to go?

So now back to the data side of this: why does it matter? Consider just for a moment that you are designing a nationally representative study, and something about the way the study is designed makes it so that people in California, Texas, and Florida, the three most populous states, are not able to participate at all. It wouldn’t be very nationally representative. You would be rightly criticized. Well, that’s about the same percentage of the population who report some form of disability in the US. It may not be a disability that would interfere with them participating in the study, but let’s go a little further. Let’s think about Michigan, Ohio, and Illinois. So the Midwest: let’s take them out of the study and put California, Texas, and Florida back in. If we remove Michigan, Ohio, and Illinois, right there we’re at about 11%. And that is about the same share of the US population as people who have difficulty concentrating, remembering, or making decisions.

Now you can start to see that those people are probably having a little bit of difficulty with some surveys if they’re having a hard time concentrating or remembering things. There’s a lot of recall activity, a lot of cognition, required to do this.

So then let’s drill down one more level, and now take out just Michigan, Wyoming, Vermont, and the District of Columbia. Let’s throw in Alaska and North Dakota as well. That’s about 4.6% of the population. And that represents about the number of people who suffer from blindness or serious difficulty seeing. So here now, we’re getting at some of the real impact of not having a design that actually takes some of these issues into consideration. You’re risking having almost 5% of the population of your study not really being represented appropriately.

So, if that isn’t really convincing, there was a great article published just recently; some of you may have seen it. A recent CDC study using the National Immunization Survey, published just this last October, found that adults with disabilities were less likely to report receiving at least one dose of the COVID-19 vaccine. So this is a real application, where disabilities themselves are having an impact on people’s lives in ways that hadn’t been thought about much before. And if you’re studying these topics and you’re looking to try to understand them, and you have a design that also misses these people, that might be a problem. They were also more likely to report barriers to vaccination, including getting an appointment online, knowing where to get vaccinated, and getting to the vaccination site. It is really important that we find ways to make sure that these individuals are included in our studies.

There are a lot of different technologies that can be used in engaging with people who are disabled in various ways. We’re going to look at technology and survey design to improve that interaction, increase that engagement, and ultimately improve data quality. We must consider those who use computers in non-standard ways. They may not have a keyboard, as many of us have; they may not have a mouse, as many of us have; they may work and interact with their computer via voice. And these assistive, accessible technologies cover a large segment of software products. There’s assistive hardware, screen readers, speech control and dictation software, and mouse grids.

They are typically tools that use a combination of things, voice et cetera, to convey what’s on the computer. I would say to anybody interested in improving your surveys and how they work along these lines: I’d recommend diving into one or more of these products. There are others out there; these are four that I’m most familiar with. JAWS is the one you always hear about, the most prevalent. It is Windows-only at this point. Apple VoiceOver is the Mac operating system version of this. The Chrome browser has a plugin that will help with the same issue. And then NVDA is another tool that can be used. At least two or three of these are free, or you can get an academic or educational license. I think students can actually get a free license to JAWS as well. Download those and try them out, see what the experience is like, because it can be eye-opening sometimes, and it’s great for testing.

Braille displays are also in use and certainly something to consider. I was amazed at the price. I just looked to see what these actually cost, and some of the high-end units can be up to $15,000, but most of them do cost $1,000 or more, so they’re maybe not as common. However, within a university setting, I suspect that they probably are available for those who need them. This is a technology that usually works in combination with a screen reader, so the contents are read to the user while an adjustable braille display at the keyboard presents the content that is showing up on the screen.

There’s also speech control or dictation software. That is another way to interface with the computer that often comes up. Dragon is a product line; it’s primarily Mac, although I think they do have some non-Mac solutions now. This is speech control or dictation software that helps you with content, so you can order the computer to do things like send an email, or you can even write the email by dictation. Both Mac and Windows also have built-in features that help with that. Mouse grids are not something you hear about as much anymore, but you do every once in a while. This is usually paired with a magnification process.

So here was our challenge. Back in 2016, 2017, we had the opportunity to work with the University of Michigan to conduct its first large DEI survey. Those of you who are affiliated with the University probably got an email from me, if you were there back then. And we’re doing the second data collection on that right now. But back in 2016, 2017, the study averaged about 15 minutes. It was a web-based survey, mobile-optimized. It was designed to be accessible to all as a census of all students, staff, and faculty. There was a preview in the fall of 2016 with a random sample survey, and then the following year it was a census. There was a random drawing for incentives, and there was a lot of messaging coordinated with the University to get it out to people and to let people know.

And this was the effort where we really wanted to ensure that everybody had the ability to participate. Since we had the opportunity to conduct a diversity, equity, and inclusion survey as a census, we wanted to show that the study would be open and available to all students, faculty, and staff who wished to participate. We wanted them to know that participants would be given an equal opportunity to participate confidentially, so options where they would have to interface with an interviewer wouldn’t be ideal. And we wanted to adjust to screen readers and other assistive technologies, and we did really well there. That’s where our focus was for this effort.

Ultimately, we wanted to provide an equitable experience for all participants. As with any effort like this, we had some resources and some limitations. One of the great resources we did have was access to the IT accessibility and institutional equity experts within the University, who helped us focus our efforts to improve the functionality and the layout. These resources were invaluable on the technological side, for figuring out how to adapt the web surveys and systems we had already been using to be as compliant as possible. You’ll hear me say things like that: as compliant as possible. Like a lot of things, this is an ongoing, changing field that we are learning about every day. Some of it is just that we’re learning about it.

Some of it is that there are new technologies, new issues, a new awareness of types of disabilities, et cetera, that come up, and you do have to pay attention. What was workable five years ago may not be exactly workable today. But we implemented these technical changes across three different major categories, and I’m going to go into some of those specifics right now. We did have some limitations, though. Time, for one, and University budgets are unfortunately not unlimited. I was taught in grad school at Michigan that you could always sit down and think about a survey project with this imaginary unlimited budget, of what you would do if you wanted to conduct this amazing study, that kind of thing, and then start paring back from there, because then you have to talk about your actual limitations. Time is one of those, and we didn’t have years to prepare for this. Finances kind of come hand in hand with time.

As for the survey platform, we had been asked to use a platform that was familiar and already used at the University of Michigan, one that we had been using as well. The platform dictates a lot of what we can and cannot do. It is very good on these accessibility issues; however, there are some limitations that we had to work with, and it’s not perfect. We decided that our primary focus was going to be on screen reader technology; that’s where we really felt like we could make the biggest impact. And then we had, additionally, a previously developed questionnaire. This questionnaire came to us to be fielded. It was developed in part by a committee that worked on it for some time. We had some say in it, but ultimately there was a questionnaire that we had to work with. And as we’ll see, there were some limitations or issues there.

So the first decision we had to make was: do we make this survey accessible to all right from the start, where no part of the design is inaccessible, or do we have to have some sort of a branching or decision point? What would we miss if we took the first path? Well, there were some measures, some questions within the survey, that were really important to the research team, and those measures required a visual component. We didn’t feel like removing them, or removing the way they are presented for people who can see, was a good way to proceed.

Ideally, we wanted to find a way to ask those questions slightly differently so that they could be accessible, but also do it in a way that can be controlled and monitored and evaluated so that maybe we could demonstrate for the scientific community that this can be asked in a different way so that it is more accessible. So ultimately we did decide that we needed to proceed with a parallel instrument, which meant that there was going to be a branch point where we needed to identify if someone needed the accessible version or not. There was too much that we would be giving up scientifically at this point to make it work the other way.

So that’s the way we proceeded. We first looked at automated ways to do this, and there are some ways you can detect whether screen readers are being used, but there are a lot of different screen readers out there and a lot of different technology, and it just wasn’t consistent. We couldn’t be certain about it. And we didn’t want to miss anyone who, for whatever reason, really felt they wanted the assistive capabilities to be available to them.

And so we decided that doing this the automated way wasn’t going to work. Instead, right at the beginning, we introduced a question. We gave the person responding a statement, and they could select a box if they decided they wanted to use assistive technology such as screen readers during the survey. If they selected it, then going forward the survey had a more fully accessible design. If they didn’t select it, they got the standard design for our web surveys. So let’s look at some of the specifics of what we did. This is a very typical screen layout for one of our surveys. We have a header across the top that’s somewhat graphical and gives people context for where they are in the survey. It’s not a functional header; you can’t click on ‘part two’ and get to that point. It just shows you where you are in the survey.

There are some repeated visual components, like a logo in the upper left-hand corner and a questions button in the upper right-hand corner, things that a sighted individual going through this will quickly start to just ignore. And that’s intentional. We put it there so they know it’s there; it provides context and kind of a frame for the survey, but after that it’s only really there if needed for some reason. Unfortunately, for a screen reader user, this requires a lot of content to be read out prior to getting to the actual question on every page. So you would hear an alert indicating that there is a logo, a SoundRocket logo, and then the title, Campus Climate Survey, and then ‘Questions.’

And then part one, part two, part three, prior to even getting to what the question is, each time… and that would get a little old. It would not be an enjoyable experience. I know I would break off during the survey if I had to listen to that on every page. So we made modifications to how we designed the survey, and we really just cleaned it up. At the very top, we get straight to the question, so the very first thing the screen reader does is go to the question and read it. The questions button was moved down to the bottom and out of the way. There’s another aspect of that that I’ll touch on in a moment, but this allows for the minimal amount of content to have to be read on every page. And then we looked at the grids. Grids in general are difficult: accessible tables need the HTML markup to be fully marked up, with header cells and data cells and everything defined in a way that makes it really clear to the screen reader technology how it is all related. Even then, it still doesn’t make it extremely easy to navigate a series of questions like this. It is difficult.
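For illustration only, and not taken from the actual instrument, here is a minimal sketch of the kind of fully marked-up grid being described: a caption, `th` header cells with `scope` attributes, and labels on each radio button so a screen reader can relate every answer cell back to its row and column. The question wording and field names are hypothetical.

```html
<!-- Illustrative sketch of a fully marked-up survey grid.
     Wording and names are hypothetical, not from the actual questionnaire. -->
<table>
  <caption>How satisfied are you with each of the following?</caption>
  <thead>
    <tr>
      <th scope="col">Item</th>
      <th scope="col">Dissatisfied</th>
      <th scope="col">Neutral</th>
      <th scope="col">Satisfied</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th scope="row">Campus climate overall</th>
      <td><input type="radio" name="q1_climate" value="1"
                 aria-label="Campus climate overall: Dissatisfied"></td>
      <td><input type="radio" name="q1_climate" value="2"
                 aria-label="Campus climate overall: Neutral"></td>
      <td><input type="radio" name="q1_climate" value="3"
                 aria-label="Campus climate overall: Satisfied"></td>
    </tr>
  </tbody>
</table>
```

Even with markup like this, as the talk notes, moving through a grid cell by cell with a screen reader remains tedious, which is part of why breaking the grid into individual questions (described next) is often the better route.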

So we decided on an easier and better solution to ensure people understood what the questions were and what their response categories were for each of them. We decided that breaking that grid out into individual questions was the right way to go. Interestingly, this is something we do anyway for mobile devices. So if you’re taking a survey on a mobile device, you would already have seen the question presented like this, whether you’re in the accessible version or not. It makes it much easier to navigate on a small screen. One other thing to mention here: this is one of those areas where we found some limitations. While we have some ability to get in and improve the labeling of parts of the question, when we get into grids there were some areas that the survey software just would not allow us to get to.

So this was also done partially because we really couldn’t make a 100% compliant table using the product. In a situation like this, where there are text boxes, the screen reader would just simply read ‘please specify, text box.’ If no specific labels were marked, the native way this software worked, it would not put in any sort of text box description. So we had to go into each one of these and introduce a custom label for that field (this was one of the more time-consuming parts of this work), so that the respondent would understand what it was that was being entered there. The text we would put into something like this would say something like, ‘Please enter your other current degree here,’ and that would be read aloud.
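A minimal sketch of the kind of custom labeling being described, with a hypothetical field name and wording, is simply an explicit label (or an `aria-label` attribute) attached to the otherwise anonymous text box so the screen reader announces what the box is for:

```html
<!-- Hypothetical example: without a label, a screen reader may announce only "edit text".
     An explicit label tells the respondent what the box is for. -->
<label for="other_degree">Please enter your other current degree here</label>
<input type="text" id="other_degree" name="other_degree">

<!-- Alternative, when the platform only lets you inject an attribute onto the field itself: -->
<input type="text" name="other_degree_alt"
       aria-label="Please enter your other current degree here">
```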

As I mentioned earlier, going back to this questions button: there’s some functionality that is very interactive and smart on the web today, where the contents behind the questions button are actually already on the screen; they’re just not visible to the user until they click on that button. If you didn’t go through and set this, the screen reader would have no guidance on whether to read that hidden content. What you see at the bottom of the screen is the button’s aria-expanded attribute being set; ARIA stands for Accessible Rich Internet Applications. That tag is telling the screen reader, hey, there is some expanded text here, read this. False would mean there’s some expanded text, but don’t read it. So it gives some guidance to the screen reader on what is relevant to actually be presented.
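As a sketch of this pattern, with invented IDs and text rather than the survey’s actual markup, the questions button carries aria-expanded and points at the collapsible panel, and a small script flips the state when the button is toggled:

```html
<!-- Illustrative sketch of the aria-expanded pattern; IDs and wording are hypothetical. -->
<button id="questions-button" aria-expanded="false" aria-controls="questions-panel">
  Questions?
</button>

<div id="questions-panel" hidden>
  If you have questions about this survey, contact the study team.
</div>

<script>
  const button = document.getElementById('questions-button');
  const panel = document.getElementById('questions-panel');
  button.addEventListener('click', () => {
    const expanded = button.getAttribute('aria-expanded') === 'true';
    // Announce the new state to assistive technology...
    button.setAttribute('aria-expanded', String(!expanded));
    // ...and show the panel when expanding, hide it when collapsing.
    panel.hidden = expanded;
  });
</script>
```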

Here was the most difficult change we had to make, and that’s because it dealt with questionnaire design. It actually wasn’t a technological issue, but we dealt with it anyway, and we did introduce some technology solutions here. This type of question is called the semantic differential. It’s fairly common in survey research. I’ve heard people swear by it; I think it’s a great way to get data at certain times. And I’ve also heard from some people who really don’t like the question type. We’re not debating that today. But assuming that you do have a question like this, as you can see in the presentation, or, if you don’t have sight, as you can’t see in the presentation, it really does require a visual presentation. There is a horizontal display of radio buttons that are unlabeled: five of them in between two words.

So in the top example, ‘disrespectful’ is on the left, then there are five horizontal radio buttons, and then ‘respectful’ on the right. You have to be able to see that to understand how you’re going to respond. The instructions themselves kind of insinuate it a little bit, but they don’t give any sort of direction as to which radio button is which, and someone getting this on a screen reader would actually hear something like: disrespectful, radio button, radio button, radio button, radio button, radio button, respectful. Now that you’ve heard that, how would you go back and decide which radio button to select? That’s a difficult issue.

So we dealt with this by doing two things. First, we needed to update the language; we needed to change the way the question is actually read. The old text became a little longer, because we had to find a way to step through these items. The new text was: ‘For the next few questions, we will ask you to think about a scale from one to five. You’ll be presented with a word pair, where one represents the first word of the pair, and five represents the second word of the pair. Thinking about the words friendly and hostile, where one represents friendly and five represents hostile, including all values in between, which adjective best represents how you would rate your institution based on your direct experiences?’ That second paragraph, the ‘thinking about the words’ part, would be repeated for every word pairing. Not ideal, but we felt this was the best way to get this across in a screen reading environment.
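One way the reworded item might be rendered, assuming the accessible version presents the one-to-five scale as explicitly labeled options rather than anonymous radio buttons (this is an illustrative assumption, not a screenshot of the actual instrument), looks like this:

```html
<!-- Hypothetical rendering of the reworded word-pair item; each radio button
     is labeled so a screen reader announces its scale value. -->
<fieldset>
  <legend>
    Thinking about the words friendly and hostile, where 1 represents friendly and
    5 represents hostile, including all values in between, which best represents
    how you would rate your institution based on your direct experiences?
  </legend>
  <label><input type="radio" name="friendly_hostile" value="1"> 1 (friendly)</label>
  <label><input type="radio" name="friendly_hostile" value="2"> 2</label>
  <label><input type="radio" name="friendly_hostile" value="3"> 3</label>
  <label><input type="radio" name="friendly_hostile" value="4"> 4</label>
  <label><input type="radio" name="friendly_hostile" value="5"> 5 (hostile)</label>
</fieldset>
```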

And this next screen just shows how it would look different. So how did this all work? Those were the things we did to improve upon the areas we found were lacking. We got 364 participants who opted for the accessible version. That was a success. I mean, there are tens of thousands of participants in the study, so to put it in context, it was not a huge percentage, but 364 people opted for that version. They took a little over 12 minutes, 12.1, to complete the survey, versus 11.5 minutes. So they took a little bit longer, but not much longer. That was good to see. They were more likely to report some types of disabilities, including ADD, 43% versus 22%.

They were more likely to indicate that they were blind or low vision, 14% versus 2%. They were more likely to indicate they were deaf or hard of hearing, 21% versus 5%. I should just say that the sample sizes are small here, so be very cautious about these numbers. I considered not even including them; they’re very small numbers. So just that caveat. We’re hoping that this second iteration will give us more data so we can start to get a little more definitive about some of this. They were less likely to report chronic illness or mental health as a disability, though. That was an interesting difference. But there was this oddity that we noticed: only 4% of those who used the accessible version self-reported as having any form of disability. That means that 96% of those 360-some people didn’t self-identify as having a disability, but they still chose the accessible version.

Why? They may have simply decided not to self-disclose the disability; that might be the case. They may be survey research students, staff, or faculty curious about the functionality; there are some of those at the university. It might be something else. So we really want to see what we can learn about this. Substantively, on the topic of DEI, did it matter that we were more inclusive? Well, I think the data has shown us that it did. Looking at satisfaction with the overall campus DEI climate, the data showed us that there was a significant difference between people who indicated they had a disability and those who did not. That is, if we’re doing things to increase the participation of those with disabilities, we’re getting more at where there might be some issues. And the data shows that there might be some issues here.

So there was justification there. I think we were happy that we had gone through the efforts that we did when we looked at this data. Those who used the accessible version of the survey also reported a higher level of discrimination regarding their ability and disability status. It was not a significant difference, given the very small sample sizes, but it was almost significant, so I’m putting it in there: 3.7% of those who did not use the accessible survey versus 6.3% of those who did. So with some additional data, we might have something significant show up here. Looking at racial and ethnic identity, and again these are people who used the accessible survey version, they reported at a higher rate that they had experienced or felt discrimination about their racial or ethnic identity in the past 12 months. So there is something happening there too. It’s not just people who are disabled.

And then we saw this around country of origin. Those who used the accessible version also reported a higher level of discrimination based on their country of origin. And this struck us as interesting, because we know, using some institutional data, that about 15% of the students in the study were reported to have been born outside the U.S. And overall, about 70% of the population reported being born in the United States, so potentially up to 30% of the population was not born in the United States. Seeing data suggesting that 21.5% of those who used the accessible version of the survey felt discrimination about their country of origin was kind of a clue, and we thought it was interesting that that high a proportion were there. So who actually used the accessible version? Well, it turns out that just over 50% of the respondents who used the accessible version reported that they were not born in the United States, compared to 26% using the standard version.

And then we looked at citizenship status: 35.2% of accessible version respondents were classified as non-resident aliens, compared to only 14.1% of standard version respondents. So we had a much larger percentage of people who were born outside of the U.S., and who were not citizens, participating in the accessible version. We unfortunately didn’t have any data to tell us what that means, but our working theory is that this reflects reliance on screen readers as translation tools. Just as screen readers are used to help people with low sight, you can have a screen reader do an automatic translation for you. This raises a whole lot of issues for survey methodologists, as you can imagine; usually we’re very controlling about what sort of translation is presented to respondents within a survey. We don’t know if this was actually happening, but I suspect this is probably what it was: given that this population saw they could have an accessible version, and they know from their experience of using screen readers for translation that accessible versions are easier, they selected it so they could have an easier path through the survey. How that translation may have impacted the questionnaire and the data is a whole other question that really requires some research at some point.

So I’m on to concluding thoughts. We’ve been through a lot of material, a lot of information, and I think these concluding thoughts are fairly simple. There’s a lot out there that we can do better with. I think it’s time for survey research to not be passive. We need to expect to learn at each step. And I’m going to tell you a little story: we fielded this survey in 2016, 2017, and we went through a lot of effort to ensure that it was working very well for screen reading technology.

We didn’t make a significant change to it in the current administration of the survey. However, it turns out that there is something we were not able to accommodate last time that is a bigger issue today. There is some additional labeling that we can’t get access to within the survey system as it stands, and that does cause some issues with making sure there’s a connection between a question and its response categories on any given screen. Our survey designs were typically presented with one question and one set of response categories per screen, with some exceptions, like where we broke the grids out, so I think we minimized some of the issues there. We have learned something new, and I can guarantee that when we do this again, we will have addressed it. It’s very possible that this next move might require that we move to a different survey system, or even develop some of our own. We’re exploring that; we’re looking at ways to make sure that we get better this time. Don’t be passive about it. Expect to learn at each step, and if you think you’ve got it, it’s probably better to just open your eyes and see where you might be able to do a little bit better next time. I think we are seeing clearly that accessible and equitable designs can reduce non-response bias and also just generally improve data quality. If we have people participating in the surveys who would not have otherwise, we are improving; we’re doing better. And we’re getting at respondents who would not have responded otherwise.

Equitable and accessible design is good design for all. Two things really point to this. One is that a lot of the things we did for an accessible design are also things we did for mobile design; there’s a lot of crossover there. If we’re designing well, it should not matter what sort of device you’re on or what sort of technology you’re using. There are standards out there that we can achieve, that we can strive for. And we shouldn’t be too persuaded by glitzy and fancy features and capabilities that might look great but reduce our ability to actually field the survey fully and equitably.

And then lastly, equitable and accessible design may also bring others into the survey that you didn’t think about. Now, there is a lot that needs to be learned there. We certainly understand that a good number of people used the accessible design, and we don’t know why. A study is definitely warranted, one that we hope we get an opportunity to do. But in learning about that, I suspect we’ll find that a lot of it is simply the technology people are using, technology that we’re maybe not used to thinking about as survey researchers, who probably primarily use keyboards and mice and interface with a monitor, and who are not thinking about these other ways, the translation issue or other things. And so we may be surprised by some additional benefits of doing this.

Fred Conrad:

I just want to thank you so much, Scott, for this terrific and thought-provoking presentation. And I hope that kind of in the spirit of your topic, the audience will join me in applauding you both auditorily and visually.


Download Slides: Equity & Inclusion in Accessible Survey Design


At SoundRocket, we cut our teeth surveying students in higher education. Their access to email and web technologies made our services an excellent fit for academic researchers who wished to engage in innovative methodologies. We have built upon those successful projects, with a growing list of large-scale standardized research studies led by scientific research teams. If you’re in search of a partner for your climate survey of higher education settings, consult with the experts at SoundRocket.

About the Author

SoundRocket

Understanding human behavior—individually and in groups—drives our curiosity, our purpose, and our science. We are experts in social science research. We see the study of humans as an ongoing negotiation between multiple stakeholders: scientists, research funders, academia, corporations, and study participants.