Hi. It’s a pleasure to be here. I’m Sareh, and this is Catharine,
and as you know we work as developers at BBC News.
This talk contains two parts: The first part is how to get
all your team engaged with Accessibility and to be enthusiastic about it,
and secondly we’ll cover our Accessibility User Research that was conducted
on the Front Page of BBC News. So, over to Catharine for the first part.
So, Part 1: How to Engage your Team with an Accessible Culture.
We’ve just completed a 16-month project to build a new front page
for BBC News website. Both Sareh and I were a part of this team.
Quite often a team will have one Accessibility Champion.
An Accessibility champion is someone who champions Accessibility.
You don’t have to be an expert, however, you do have to be passionate
about Accessibility and be willing to shout out for this cause.
We have over 130 Accessibility Champions at the BBC.
From the outset, the Front Page team had more than one
Accessibility Champion: both Sareh and I.
As the project progressed, we achieved engagement with Accessibility
from everyone in the team. So, how did we achieve this?
Step 1: Engage all disciplines; everyone has a part to play.
Accessibility is a cross-discipline solution, not just the role of a developer.
To meet Accessibility standards you need to have engagement
from everyone in the team; everyone has a part to play.
For Accessibility to be a success in our project,
we needed to engage all disciplines.
Your set-up might be slightly different. In our team this meant Product Owner,
Business Analyst, Designer, Developer and Tester.
To achieve engagement, we wrote Accessibility checklists for each discipline,
from Product Owner to Tester, so Accessibility could be considered
at every stage and so everyone knew what part they had to play.
I’ll now talk a little, in turn, about each discipline. First, Product Owner.
As a result of the Product Owner being engaged with Accessibility,
they commissioned Accessibility User Research on the new front page,
both lab-based and home visit. This was the first time this type of research
had been carried out on the BBC News responsive web product.
More about this later.
Business Analyst. The Business Analyst wrote
Accessibility Acceptance Criteria for all parts of the project.
This was also the first time this had been done. The Accessibility Acceptance
Criteria were written based on Assistive Technology use, for example:
given a user with a screen reader navigating by heading
to the H2 of the component,
then the screen reader reads out “Around the UK”, “Heading level 2”;
or, navigating by landmark to the component,
the screen reader reads out “Around the UK”, “Region landmark.”
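Criteria like these map directly onto the page mark-up. As a rough sketch (the exact BBC mark-up isn’t shown in the talk, and the id here is made up), a component like “Around the UK” could be structured so that both heading navigation and landmark navigation announce it:

```html
<!-- A hypothetical sketch, not the actual BBC mark-up. -->
<!-- aria-labelledby gives the landmark the visible heading's text as its
     accessible name, so navigating by landmark announces something like
     "Around the UK, region", while navigating by heading announces
     "Around the UK, heading level 2". -->
<section aria-labelledby="around-the-uk-heading">
  <h2 id="around-the-uk-heading">Around the UK</h2>
  <!-- component content goes here -->
</section>
```

A `<section>` only becomes a navigable landmark when it has an accessible name, which is why the `aria-labelledby` association matters here.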
Designer. Design were also more engaged with Accessibility,
with designers thinking more about the whole experience,
including those with impairments, considering factors such as colour contrast,
colour blindness, touch area size,
the impact Flexbox might have on content order and tab order
for screen reader users. Peer Accessibility reviews took place
on designs with Accessibility Champions along with discussions
with the Development team on things such as mark-up.
Developer. The Development team had a new, more in-depth checklist
to work to. They used BBC-a11y, a new tool from the BBC Accessibility team,
which checks code against the BBC Accessibility Standards and Guidelines.
They integrated Accessibility linters such as JSX-a11y
and tested with Assistive Technology throughout development.
This resulted in a high standard of Accessibility on the new code
written for the Front Page and increased Accessibility to shared code
that was used on the Front Page. Enthusiasm spread to other
Development teams and other departments who also used the shared code,
which led them to make improvements to their own code.
And finally, Tester.
The Tester also had a new checklist to work to.
They tested with Assistive Technology and used the Accessibility
Acceptance Criteria written by the Business Analyst to test against.
Step 2: Define Assistive Technology Priorities.
So that everyone knows what Assistive Technology to test with
and what the priorities are.
Assistive Technology – often shortened to “AT” – can make devices easier
to use and content more accessible for someone with an impairment.
This can include objects such as input devices, or systems
such as computer software, which increases or maintains
the capabilities of someone with an impairment.
Input devices can range from switch devices to braille displays,
and computer software can range from screen magnifier,
speech recognition, reading solution to screen reader, and more.
We wrote our Assistive Technology Testing Priorities based on
the gov.uk Assistive Technology Survey in 2016 and our own audience.
This meant we were now testing with Assistive Technology
other than screen readers, which had been our main focus before.
We now test from desktop to mobile and from screen magnifier,
speech recognition, reading solution to screen reader.
Step 3: Accessibility Laptops –
Easy Access for all to Assistive Technology.
For testing with Assistive Technology to be a success, you need to make sure
there are no barriers to accessing this. We all use Macs.
So, we got PC laptops specifically for testing with Assistive Technology,
which is PC-based, such as ZoomText, Dragon, JAWS, NVDA.
The laptops were shared between all team members
and could be accessed by anyone at any time.
This meant no more slow virtual machines and no Mac keyboard
where you needed a PC shortcut.
Step 4: Testing Steps –
How do I use the Assistive Technology?
Next you need to make sure everyone knows how to use
all the Assistive Technology.
So, we wrote simple testing steps for all the Assistive Technology
defined in our priorities, from screen magnifier, speech recognition
and reading solution to screen reader. So, anyone in any role,
having never used a particular Assistive Technology before,
should be able to come to the testing steps
and easily test a component.
Step 5: Accessibility Swarms. Everyone knows the Experience.
The fifth step to engagement, Accessibility Swarms.
Before a ticket moves from development to test, as a team everyone
downs-tools and gets together to run through the final Accessibility checks
and tests with all supported Assistive Technology. All disciplines can join in.
Ideally you want a mix of roles such as Business Analyst,
Designer and Developer. Everyone considers the whole experience.
You can talk through any issues together as a team.
It’s now a team responsibility; the onus is not just on one person.
And it’s more fun doing it as a team than on your own. Ed, our trainee, said:
“Doing Accessibility testing like this in a group makes the process more
enjoyable and has helped me get familiar with using different Assistive Tech.”
Five top tips to engage your team with an accessible culture.
1. Accessibility is for everyone; engage all disciplines
2. Define clear Assistive Technology priorities
3. Have no barriers to accessing Assistive Technology
4. Simple testing steps so that everyone knows how to use
the Assistive Technology
and step 5. Accessibility swarming through the final checks.
So, over to Sareh for Part 2.
Okay. So, now that Catharine has given you a bit of an insight into how we test
and deal with things internally throughout the course of the project,
I’m going to focus on recent Accessibility user research that we carried out
on the News Front Page.
So, the reason why we decided to carry out this research was that we wanted
to focus on the Front Page because it had a new layout, it had new structure.
And so we wanted to ensure that the experience for this page was as good as
we could possibly make it for users, including those using Assistive Technologies,
before we started rolling what we were doing out, moving from the News
Front Page onto other pages and other sites across the BBC.
So, to do this an external agency was commissioned
to undertake this research project, and this user research was lab-based,
which involved getting a set number of participants,
which were members of the public, to give feedback on the site.
What were the main aims for the research?
What did we want to gain out of conducting this?
First, we wanted to find out what was working and what wasn’t.
What could we improve? What did we think we were doing well but not quite achieving?
Was the Front Page a good user experience for users
of Assistive Technology? That was our first main aim.
And secondly, we really wanted to use this as a way to increase
awareness and engagement with Accessibility within the department as a whole.
From senior stakeholders, enabling them to see first-hand
the impact on users of the decisions that they make,
all the way to junior team members
getting their first exposure to what Assistive Technology really is
and how it can be used by a wide range of users
with impairments.
As you can imagine, setting up user research takes a bit of time,
and to decide how to do it we needed to focus on planning.
This included several parts, which I’ll go through in detail.
For example, what participants did we want to recruit?
What Assistive Technologies did we want tested because we didn’t have time
to go through every single possible technology that exists,
so what do we want to prioritise to use? Thirdly, what content were we exactly
going to focus on? And fourthly, what session structure should we have?
Even thinking in more detail about the time we had with each participant:
how did we want to break up that time to ensure
it was focused and we could gain the most useful feedback from them?
And finally, who are we going to invite to observe the session,
because it’s not so useful if you just have one user researcher observing;
as I said, one of our main aims was to increase awareness
throughout the whole of the department, so observers were key as well.
Here’s a screenshot of the News Front Page on desktop.
This is one of the main parts that we were testing.
What participant profiles did we care about,
so what kind of participants did we want?
Based on the budget, time and focus of the project,
we decided to recruit 12 participants.
The research was to be lab-based or a home visit,
depending on the needs of the particular participant.
We also decided to extend the sessions: usually they are 60 minutes,
but we had 90 minutes per person.
So, that was to enable people to get set up with their Assistive Tech,
because some of them will bring their own devices,
so they had the software that they needed, and to get that set up so it was
all visible, with cameras filming the screens and interactions.
So that was the general set-up. It also allowed us to look at both desktop
and mobile for those people that use both.
Some people had their mobile device, and some people had a laptop
or tablet device that they used at home, things like that.
We wanted to capture feedback from a range of devices
that people were actually using.
All right. So, in addition to that, we wanted to ensure the participants
had a range of different impairments.
This was to get feedback from people with as many different
perspectives as possible. So, for example, of the 12 there were four
with cognitive disabilities, such as dyslexia and autism; six with vision
impairments, low vision or blindness; one who was hard of hearing,
so sign language is their first language; and one with motor impairments
who was a Switch user. That was the person
the home visit was conducted with, so they would be able
to use the technology they normally used.
A range of different people. Okay, so those were the participants.
So, how did we match those participants, for recruitment, to the kinds
of Assistive Technology we would prioritise for this testing?
On the screen is a screenshot of some of our Accessibility checklists
and user information that Catharine was referring to earlier.
It includes things like ZoomText Magnifier/Reader, Dragon
NaturallySpeaking, JAWS, Read&Write, VoiceOver for iOS, et cetera,
and the different versions most commonly used by users
according to the gov.uk survey that we referred to earlier.
As we talked about the different impairments and now mapping
to Assistive Technologies, we also ensured that we specified
the different combinations of software, browser and platform.
For example, Dragon NaturallySpeaking Version 13 with IE11.
It’s pretty detailed, you might think, but that’s what we started off with.
Okay. So, that’s the Assistive Technology.
What else did we need to plan? Test pages: the content
we were actually going to put in front of users during the session.
Of course, by that stage the page was already live, so we had the live page,
with live News, which was useful, so we weren’t just relying on mock-ups
or anything. That live page was what we actually started off with
in a session. However, we have different modes
for the Front Page: when there’s a breaking News alert,
there are different layouts, with more content shown
in different structures, and there might be a live story,
which is updating content. And we can’t exactly predict when; we can’t
schedule News to happen conveniently during our user research session.
So, we had to create test pages so that we could show
the different modes of our site to users in the testing.
That’s why we have test pages in addition to our live site.
Okay. So, next in our planning set-up, the session structure,
so what will we focus on during the 90 minutes that we have with the user?
There’s different aspects, all important in their own ways.
It might sound obvious, but we started off with an introduction:
a chance to gain a little background information from the participant
and get them set up with their Assistive Tech in the room.
After that we went to free exploration of the page. That means
loading the Front Page on their device to see their initial reactions
and general comments, using their primary device, and then doing the same
on their secondary device. So, if their primary device was a phone,
we’d go through that, and then if their secondary device was a laptop,
we’d go through that as well. That was the free exploration,
and it gave us an insight into initial impressions and things like that.
Then after that was task-led exploration. Participants were provided
with tasks on various parts of the page. We had new features that we wanted
to test, to see how they behaved and how people interacted with them.
For example, selecting “Your Location” to get local News,
so that’s a part that appears on the Front Page.
We also wanted to get feedback on elements such as navigation,
landmarks, heading structure, layouts, and the ease of finding specific content:
how easy was it for a user to get to content that they wanted to access
and knew was on the page? Then, after the task-led exploration,
the user researcher who was in the room with the participant
came round and asked us, the observers, whether we had any questions.
That was very interesting. So, during the majority of the session,
any observer, so anybody in the team really, could also contribute questions,
maybe clarifying questions or any follow-up questions they felt were useful.
So, that’s what the general session structure was.
Yes, and I alluded to this a bit: there were observers. On the screen
there’s a photo of the observation room, which has a one-way mirror.
On the other side is the room where there’s a desk and a few cameras,
so a mobile device can be held under a camera to capture the screen
as a user goes through it, and there are also connections for laptops,
so they can be live-streamed into the observation room.
We wanted to get as many observers as possible. The room isn’t that big;
it only holds about eight people.
And in an organisation as large as the BBC,
the room books up pretty quickly.
It was really important to give people enough notice,
sending round communications about the user research that was happening.
In our case, we used email
and Slack to communicate, and to encourage people,
from the most senior to the most junior, to sign up
and get a slot to attend.
And that was pretty valuable and we’ll go through what people’s
feedback was in a little bit. Now to the results of the sessions.
The agency provided us with a detailed report, which is a bit too detailed
to go into right now but we can definitely talk about it later.
For me, I found the types of feedback really interesting,
because I hadn’t observed Accessibility User Research
done in this particular way before. The types of feedback covered
what we were doing well, what was causing difficulty for some, and also
suggested improvements. So, some of those suggested improvements
came from the users themselves; some came from the
user researchers, who analysed all of the feedback
from the different participants and suggested ways to overcome
the difficulties users had. And from the report that the agency provided us,
we were able to then take those improvements and use them in future
planning sessions on how we would like to improve the site.
High-level findings included things like: there was generally
an overall positive experience, which was quite nice to hear,
because you’re never quite sure what users might say
when they are absolutely able to say anything they like
about your product. And they felt that although there was some room
for improvement, it was more accessible than other websites
they used, in terms of structure and things, and users loved
the clear, linear layout on mobile.
On mobile – I’ve got two screenshots of parts of the News Front Page –
the screen can only show a certain amount of content,
and there’s one clear stream of content,
so there aren’t multiple ways of navigating in terms of order.
The sighted participants felt that it was not
overwhelming, whereas desktop could be,
and mobile users liked that the text size and image size
were the same for the most part, or at least consistent.
That makes it easy to take in content.
That was some of the feedback that we got.
And I’d like to say that not everything was seamless.
Although there was lots and lots of planning and preparation,
you really can’t predict everything; whatever can go wrong will go wrong.
I’m just going to quickly go over some of the things
that happened. So, although we set out to get 12 participants,
nine out of the 12 weren’t exactly what we recruited for.
For example, in one session almost the entire session had gone by
and the participant hadn’t actually demonstrated any need for
the particular Assistive Tech, so the user researcher tried
to tease out what was happening. And it became apparent that
although they had been recruited as a user of ZoomText, which,
as some of you might know, is a particular screen magnification software,
the participant actually thought that
meant zooming in on the text, and they said: “Yeah, sure,
I zoom in on the text all the time.”
There was a bit of a misunderstanding in
the recruitment process really. Yeah, so that happened,
and it highlighted to us how much clarity
is really needed in recruitment and how sometimes,
even though you might plan for every event and challenge,
you really can’t predict everything upfront.
Yes, and moderation. Moderation means the person
in the room with the participant, talking them through the session.
What we didn’t know until the sessions were about to start was that
the agency had several people involved:
an Accessibility expert was going to be conducting
half of the sessions, and the main user researcher the other half.
And though the Accessibility
expert had really great insight into all of the different
use cases of Assistive Tech, how to get things set up
and the real details of all of that, they didn’t have
the user research expertise, and so that wasn’t particularly great
from a research perspective.
This was a thing that we would like to focus on a little bit in the future
when we’re running future sessions.
So, what factors might have caused these things to happen?
In organising all of this user research, it took a long time
to get everything set up, mainly due to difficulties over budget
approval and things like that, which can happen in a large organisation.
We had three separate internal user researchers leading the project
over several months, and the lead researcher at the agency
also changed during that time, so there was a lot of going
back over things and over planning. The participant profiles
we had upfront were too specific; as I said,
we had things like particular versions of software.
So the agency had difficulty recruiting people,
and there was pressure from the agency; it became
necessary to relax the constraints for finding participants.
For example, one of the recruitment requirements was
to use Dragon NaturallySpeaking,
which is software used for voice input.
But the participant in the testing actually
only used Dragon Dictate, which is dictation software,
so there was a bit of confusion in that sense.
That was one of the downsides that we had there.
Overall, though, in terms of feedback, there were many
aspects of the research that went really well. We fed back to the agency
the things that we felt didn’t go so well, and they were really
great at taking that on board, accommodating us and being
super understanding about it.
And as a result we actually re-ran about seven sessions
with new participants within the same month,
and the results fed into the main report that they gave us,
which was really great.
Now, we’re close to the end of the talk, so Catharine and I
would like to tell you about the observers
who came to the sessions and what they said about all of this.
Over 40 people came to the lab to observe from most teams
in the department and from most disciplines.
So, from Editor, Product Owner, Business Analyst, User Researcher,
Designer, Developer to Tester. And in addition, those who couldn’t
observe in person watched the livestream in our offices in London,
Cardiff and Salford. So, after the research sessions were conducted,
we sent out an online survey in which we asked observers
a few short questions. Were we right, had levels of awareness
increased in the department? Here are a few of the questions
and answers from the survey.
Prior to observing at the lab or the livestream had you witnessed
anyone with an impairment use Assistive Technology before?
And the answer was 55 per cent “No,” and 45 per cent “Yes.”
And Nour, one of our UX Designers said:
“I now understand the pain that users of Assistive Technology
can go through to access content much more.”
Question 2. As a result of observing the research,
do you feel more informed about Assistive Technology
and how someone with an impairment might use this?
10 per cent said: “No”, and a whopping 90 per cent said: “Yes.”
So, Stephen, one of our Developers, said he learned a lot:
that “not every screen reader user is an expert at using
the Assistive Technology and not every user
will navigate using headers; some may prefer using links
or some other type of element.” And Shelpa, one of the Testers,
observed that with Assistive Technology, “affordability
and lack of awareness and training can affect the way
users engage with it.”
Question 3: As a result of observing the user research, do you feel
your level of awareness and engagement with Accessibility improved?
95 per cent of people said: “Yes.” – Takako, our Developer, said that
“it was quite an eye-opening experience and I certainly would do my best
to apply this knowledge I gained in my everyday practice.”
Alison, an Assistant Editor, so on the content creation side of things, said:
“I thought it was really interesting to see people with Accessibility issues use
our website in real time and discover the particular problems they face.”
All in all, summing up the different aspects we came across in
running our own Accessibility user research:
First, plan. Plan which AT you will use; prepare your test pages;
use short URLs, like bit.ly links, to save time;
and plan your session structure.
Inform all of the observers so they can attend, and make sure
there’s consistency across different sessions. Secondly,
ensure you have an Accessibility expert leading recruitment,
whether that’s a user researcher with in-depth Accessibility knowledge or an
Accessibility Specialist, as they will have knowledge of the AT specifically.
Thirdly, try to be flexible. For example, just because we tested internally
with the latest version of JAWS doesn’t mean research can only
be conducted with that particular version. You can open up a bit
and give the agency a wider range of participants to find.
Then after that, make sure you invite all disciplines to be involved.
So, that includes sending invites well in advance.
If you’re able to, get the research live-streamed for those
who can’t attend in person.
That can also create a buzz in the office, which it did for us,
and it enables people in different departments and locations
who can’t get to the research centre to participate. And finally, and I think
most importantly, get feedback: feedback from the agency,
feedback from the observers to see what everybody gained
from the sessions and how everybody can be supported
in thinking about Accessibility and accessible websites
throughout their future work as well.
So, thank you for listening.
– Hey, I’m just really, really interested in the point where you effectively
had to redo sessions because the people recruited
were not using the Assistive Technologies the way you intended;
effectively they were using Dictate rather than NaturallySpeaking.
Do you know how much money that cost you?
– The agency redid it for free.
– Okay. Do you know how much it cost them?
– We’ll speak about that afterwards if you want.
– It’s a lot. It’s awful. So, I did this sort of thing, a bit of
Accessibility work within the BBC, 10 years ago.
And we had exactly the same problems there.
It’s really difficult.
There are better ways of doing recruitment with disabled people these days,
so I don’t want to advertise at you now,
but certainly it’s one of the things we found is that you’ve just got to get
that right or it’s just loads of money down the drain.
– On the responses you got from your co-workers,
the very small proportions where it was like 5 per cent “No” –
did you ever go back to them and say: “Hey, how come?”
– Well, some of those people actually were in some of the sessions
where the recruitment didn’t go so well,
so some people actually never observed anyone using Assistive Technology,
unfortunately. So, quite often that was why the number was down.