Experience Design for the Blind

When creating VR games and experiences, we should design them so that they can be used by the blind.

VR for blind people may seem counterintuitive, but if you think it through it makes a lot of sense. The thought occurred to me after visiting the Notes on Blindness VR experience, and it was cemented by something Lucas Rizzotto said on the Research VR Podcast.

I hate buttons, and I hate two-dimensional interfaces… a Like button is exactly not the way to do it.

Lucas Rizzotto

It’s often said that VR is a visual medium, but with properly implemented spatial audio, VR can be an auditory medium too. There’s no reason why someone who experiences actual reality without any visual information couldn’t do the same in a virtual environment.

Thinking about our virtual worlds in this way also helps us to imagine interaction paradigms that fit better in a 3D space. For example, if 2D menus are out – what creative possibilities exist to replace them?

Some other examples:

  • If the user has no visual information to understand their position in the world, what audio cues can I provide?
  • Maybe there should be a lake rippling to one side, and the wind rustling the leaves in the trees behind?
  • How do I represent locomotion and movement with sound?
  • Are there any sounds reflecting the player’s status (health, stamina, or effects)?
  • How can I precisely position an obstacle or goal with audio cues?

Of course, thinking about our virtual worlds in this way will profoundly increase immersion for everyone.

If you enjoyed this post, please consider adding this website to your bookmarks or favourites. Bookmarks are easy, free, private, and require no special software.

Planet Defence

WebVR experiment #2 with A-Frame

Arrow keys to move the turret, space to shoot. Save the planet (it’s behind you).

Debrief

This project turned out to be much harder than I anticipated!

The turret charging its laser…

One of the first issues I ran into was rotating the turret. Because of the shape of the model, the rotation point was totally off, and as far as I can tell there’s no way in A-Frame or three.js to fix this directly.

What I ended up doing was pretty neat: I created a box and placed it at the rotation point I wanted. Then I made the turret model a child of that box, positioned relative to its parent. That way, applying rotation to the box rotates the turret around that point too.
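Here’s a minimal sketch of that pivot trick in plain three.js, assuming scene and turretModel already exist (the positions are illustrative):

```js
// Invisible parent placed at the desired rotation point (the "box").
const pivot = new THREE.Object3D();
pivot.position.set(0, 1, 0); // world-space pivot point
scene.add(pivot);

// Offset the model so the desired rotation point sits at the pivot's origin.
turretModel.position.set(0, -0.5, 0.3);
pivot.add(turretModel);

// Rotating the pivot now rotates the turret around the chosen point.
pivot.rotation.y = Math.PI / 6;
```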

There were lots of animations happening here. The turret needs to rotate, the beam grows in scale and position, the light surrounding the beam grows in intensity, and finally, the beam shoots off into the distance. I found that using A-Frame’s <a-animation> was messy and unwieldy; in my last experiment, I found myself having to clean up the DOM once each animation had completed. Instead, I opted to use TWEEN (tween.js), which is bundled with A-Frame.
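As a rough sketch, the charging animation looks something like this with TWEEN (entity ids, durations, and target values are all illustrative; depending on your A-Frame version you may need to call TWEEN.update() yourself from a component’s tick):

```js
const beamEl = document.querySelector('#loading-beam'); // illustrative id
const lightEl = document.querySelector('#beam-light');  // illustrative id

// Tween a plain state object, then push its values onto the scene.
const state = { scale: 0.1, intensity: 0 };
new TWEEN.Tween(state)
  .to({ scale: 1.0, intensity: 2.0 }, 1500)
  .easing(TWEEN.Easing.Quadratic.In)
  .onUpdate(function () {
    beamEl.object3D.scale.setScalar(state.scale);
    lightEl.setAttribute('light', 'intensity', state.intensity);
  })
  .start();
```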

Another issue I ran into was positioning the beam. There are two states for the beam: loading and firing. When it’s loading, it really needs to be a child of the turret, so that it can be positioned and animated in exactly the right place, and continue to move with the turret, before it’s fired. However, after it’s fired, it should not be linked to or follow the turret rotation in any way.

To solve this, I use two different beams. The loading beam is positioned as a child of the turret. When it’s ready to fire, I need its position and rotation so I can apply them to the second, “firing” beam. The problem here is that the “loading” beam’s position is relative to its parent.

I was able to grab its world position by creating a new THREE.Vector3 and using the setFromMatrixPosition method with the “loading” beam’s beam.object3D.matrixWorld property. I then apply the world position to the “firing” beam, along with the rotation of the turret.
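In code, that looks something like this (beam, firingBeam, and turret are illustrative entity references):

```js
// Read the loading beam's world position out of its world matrix...
const worldPos = new THREE.Vector3();
worldPos.setFromMatrixPosition(beam.object3D.matrixWorld);

// ...and apply it, plus the turret's rotation, to the firing beam.
firingBeam.object3D.position.copy(worldPos);
firingBeam.object3D.rotation.copy(turret.object3D.rotation);
```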

Once the firing beam was in place, I had a lot of difficulty with the released beam actually firing. TWEEN captures variable values at the moment the tween is defined, not when the tween starts. Even changing the value of a variable during a tween’s onStart method won’t have any effect on the value during onUpdate.

In the end I resolved this by tweening a bare percentage and calculating the position (interpolating between the start and end points) during the onUpdate method, which isn’t an optimal use of resources, but it was the best I could manage.
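A sketch of that workaround: since start and end live outside the tweened object, they can be set when the beam fires, and onUpdate derives the actual position from the tweened percentage (names and values are illustrative):

```js
const state = { t: 0 }; // progress, 0..1
const start = new THREE.Vector3();
const end = new THREE.Vector3();

new TWEEN.Tween(state)
  .to({ t: 1 }, 800)
  .onStart(function () {
    start.copy(firingBeam.object3D.position);
    end.copy(start);
    end.z -= 100; // placeholder end point; see translateZ below
  })
  .onUpdate(function () {
    firingBeam.object3D.position.lerpVectors(start, end, state.t);
  })
  .start();
```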

The next major challenge I faced was figuring out the end point that I wanted the beam to fire to. It’s no good just animating the beam’s position.z, because that doesn’t take the turret’s rotation into account (the world z axis points the same way no matter where the turret is aiming).

After looking into some complicated solutions (such as creating a new Matrix4 with the turret’s quaternion, and translating the z position of the matrix) I finally discovered three.js’s very handy translateZ method, which did all the heavy lifting!
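A sketch of that approach, using a throwaway Object3D as a probe (the distance is illustrative, and the sign of the translation depends on which way the model faces):

```js
// Move a probe along its own local z axis; translateZ respects rotation.
const probe = new THREE.Object3D();
probe.position.copy(firingBeam.object3D.position);
probe.rotation.copy(turret.object3D.rotation);
probe.translateZ(-100); // 100 units "forward" in the turret's local space

const endPoint = probe.position.clone(); // world-space end point for the tween
```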

To-do

  • Sounds
  • Add controller support for moving and firing the turret
  • Add enemy spacecraft, flying toward the planet for you to shoot at
  • Add collision between beam and enemies
  • Explosions

Drone Attack

WebVR experiment #1 with A-Frame

WASD to move around. Look at the drones to fire your laser at them.

Debrief

This was a fun first project! I ran into some interesting problems along the way, but mostly things went pretty smoothly.

Shooting at drones is much less violent than shooting at humans.

A lot of the fun for me on this project has been playing with lights and sound. When the laser is activated, it shines a red spotlight on the target.

The positioning of the sounds adds a lot to the scene, and it's super easy in A-Frame – I just made each sound a child of the element it emits from. You'll notice that as you walk close to the drones they get louder, and the same is true for the sparking sound, while the laser sound emits from the camera so it's always at the same volume.
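For example, something like this (asset ids and selectors are illustrative):

```js
// Positional audio by parenting: the hum rides on the drone...
const hum = document.createElement('a-entity');
hum.setAttribute('sound', { src: '#drone-hum', autoplay: true, loop: true });
document.querySelector('#drone').appendChild(hum);

// ...while the laser sound is a child of the camera, so its volume
// never changes as the player moves around.
const zap = document.createElement('a-entity');
zap.setAttribute('sound', { src: '#laser-zap' });
document.querySelector('[camera]').appendChild(zap);

// Later, on the fire event:
zap.components.sound.playSound();
```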

I ran into lots of trouble with the particle component (the sparks) – it wasn't playing nicely with the environment component. It took me a while, but I eventually tracked it down to this bug, which I resolved (at least for now) by removing fog from the environment.

The position of the laser was another difficult aspect. It took me a while to realise that if I matched the start point with the camera position, I would be looking directly down the line, and thus unable to see it!

I'm not quite happy with the single-pixel-width line. Of course, I could use a cylinder, but shapes like that are generated with a width, height, depth, rotation, and position, as opposed to my ideal case: start, end, and diameter.
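For what it's worth, here's a sketch of a helper that maps that ideal case onto what three.js actually wants (cylinders are built along their local y axis):

```js
function cylinderBetween(start, end, diameter) {
  const length = start.distanceTo(end);
  const geometry = new THREE.CylinderGeometry(diameter / 2, diameter / 2, length, 8);
  const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0xff0000 }));

  // Position the cylinder at the midpoint between the two ends...
  mesh.position.copy(start).add(end).multiplyScalar(0.5);
  // ...then aim its local y axis along the start->end direction.
  mesh.quaternion.setFromUnitVectors(
    new THREE.Vector3(0, 1, 0),
    end.clone().sub(start).normalize()
  );
  return mesh;
}
```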

Another problem is that the start and end positions can change while the laser line is visible (if the camera or drone moves). I could lock the laser to the camera by making it a child of the camera, but there would be no way of locking it at the drone end (plus I would have to deal with converting the world position of the drone to a position relative to the line in the camera).

So, rather than do that, I opted for the more resource-intensive method of reapplying the start and end positions of the laser line on every tick. In hindsight, this is far from ideal, and the likely cause of memory crashes (especially on mobile).
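The per-tick version looks roughly like this (the component name and selectors are illustrative; the vectors are created once in init to reduce per-frame allocation):

```js
AFRAME.registerComponent('laser-tracker', {
  init: function () {
    this.start = new THREE.Vector3();
    this.end = new THREE.Vector3();
  },
  tick: function () {
    document.querySelector('[camera]').object3D.getWorldPosition(this.start);
    document.querySelector('#drone').object3D.getWorldPosition(this.end);
    this.start.y -= 0.2; // drop below the eye line so the line is visible
    this.el.setAttribute('line', { start: this.start, end: this.end });
  }
});
```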

I did experiment with a Curve component, which allowed me to create a curve between a start and end position, and draw a shape along that curve (I used a repeating cylinder). Unfortunately, working with this component on every single tick was far too slow.

What I'd like to try next is drawing the laser as a child of the target (so that it moves with the target), and, if the camera moves, simply turning the laser off until a new click event occurs.

To-do

  • Resolve memory leak
  • Drones explode after x seconds of being hit by laser
  • Scores
  • Timer
  • Start / Restart

Virtual Reality Accessibility

There's a lot of information out there about designing the web for accessibility. Over the last 10 years, the world has learned a lot about how to accommodate a vast diversity of needs and abilities.

So, when it comes to designing virtual experiences, we don't need to start from scratch. Here are a few things you should consider from a VR accessibility point of view.

Seated-first design

Like responsive web design, which accounts for the screen size of the user, VR experiences should be responsive to height and posture. Consider audiences who use wheelchairs, are unable to stand for long, or are short (including children).

  • Don't place buttons or interactive elements in hard-to-reach positions
  • Ensure that NPC eye-gaze accounts for height, not just direction (see the sketch after this list)
  • Include locomotion options which can be used from a seated position
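On the eye-gaze point, here's a minimal sketch of an NPC head that tracks the user's actual head position, height included (the component name is illustrative):

```js
AFRAME.registerComponent('gaze-at-user', {
  init: function () {
    this.target = new THREE.Vector3();
  },
  tick: function () {
    // lookAt uses the full 3D position, so a seated or shorter user
    // gets eye contact instead of a stare over their head.
    this.el.sceneEl.camera.getWorldPosition(this.target);
    this.el.object3D.lookAt(this.target);
  }
});
```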

Audio cues

VR experiences shouldn't rely solely on audio cues to grab a user's attention, or require audio in order to complete a task. Not only does this make your experience accessible to hearing-impaired people, it also lets users play on mute if they want to.

  • Provide a subtitles option for speech
  • Provide visual indicators where positional audio is used
  • Provide visual cues for success or failure events (e.g. failure to start an engine should be visible as well as audible)

Readability

Current hardware resolutions make it difficult enough to discern text as it is. But as the resolution of VR headsets increases, don't be tempted to reduce font sizes to match. Consider users with a vision impairment who may not be able to make out small text.

  • Recommended font size is an angular height of 3.45°, or roughly 6.04cm at a distance of one metre (the sketch after this list shows where these numbers come from)
  • Text should face the observer perpendicularly, and rotate to follow the observer's view
  • Use a line length of 20–40 symbols. With bigger fonts, lines should be shorter.
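The size recommendation follows from simple trigonometry: text subtending an angle θ at distance d has a physical height of 2d·tan(θ/2). A quick check, assuming the 3.45° figure is the angular height:

```js
// height = 2 * distance * tan(angle / 2)
function textHeight(angleDegrees, distanceMetres) {
  const angleRadians = (angleDegrees * Math.PI) / 180;
  return 2 * distanceMetres * Math.tan(angleRadians / 2);
}

console.log(textHeight(3.45, 1)); // ~0.0602 m, matching the ~6cm above
```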

More information on fonts in VR can be found in Volodymyr Kurbatov's article.

Colour

Color blindness or color vision deficiency (CVD) affects around 1 in 12 men and 1 in 200 women worldwide. This means that for every 100 users that visit your website or app, up to 8 people could actually experience the content much differently than you’d expect.

http://blog.usabilla.com/how-to-design-for-color-blindness/

Virtual worlds tend to be visually rich, which often means a much broader range of colour than one would typically see on a website, or even in the real world. With this in mind, strive to ensure that colour isn't used as the only way of providing contrast.

  • Choose a variety of textures to provide better contrast between visual elements
  • Use both colours and symbols in interface design
  • Avoid "bad" colour combos (e.g. Green + Red, or Blue + Purple) for primary elements

VR is still in its infancy – we're still learning about what works and what doesn't. As the industry grows, it's important to continue prioritising accessibility, so that our virtual stories, games, environments, and tools can be explored by everyone.