
(Virtual) Hallway Track

One of the many takeaways I got from Matt Mullenweg’s Distributed podcast (a show about the future of work) was the importance of high-fidelity communication.

Matt talks about the huge difference between written communication and voice chat. There’s another jump up from audio-only to video. Each step up in fidelity creates more bandwidth for communication.

You can get so much from someone’s tone of voice, their approach, etc. It’s really much higher bandwidth [than text].

Matt Mullenweg

It’s obvious, right? We all know that words lack tone and inflection. Voice lacks eye contact and gestures.

What about video? Video lacks group dynamics, and an ineffable quality of presence.

That’s why I’m so interested in the tech that creates the next leap in fidelity: Virtual Reality.

In September 2018 I did a fun experiment. I joined a Zoom call from VR. As a virtual avatar, in a virtual environment.

Joining a Zoom call from BigscreenVR. Pictured is the virtual selfie-camera which I used as a webcam, and the giant cinema screen where I projected the call.

Of course, this didn’t help improve fidelity, but I’ll never forget something that Brendan (top left) said: “I can tell that it’s you just by the way you move your head and hands.”

Brendan and I have spent a lot of time together in person – this wasn’t obvious to the other participants on the call. But it made me wonder: what would it be like for everyone on the call to join me in VR? Would we learn what it felt like to just be present with each other?

Since then, I’ve been involved in a bunch of VR-only meetings, and can attest that the fidelity of communication, while not quite as good as in-person, far outshines a video call.

Aside from a sense of presence, and aside from the dynamics of being able to move between conversations, being in VR also allows you to experience things together.

For example, I have a weekly one-on-one with an old mate who I don’t otherwise get to see often. We hang out and chat about work, kids, tech, and hobbies, all while floating through an abandoned space station or drifting between rapids. Shooting zombies or shooting hoops. The crazy part is – those memories feel as real as the times we’ve spent together in person.


Recently I had to cancel a trip to Thailand because WordCamp Asia was cancelled due to COVID-19 concerns. At the time I was somewhat critical of the decision to cancel, but it turns out I was wrong. Events all over the world are being cancelled. Travel to meet in person is going to be really hard in 2020.

What a good opportunity this could be, then, to start experimenting with meeting in VR. What if we had a WordCamp VR, or a WordPress VR Meetup?

While the main presentation or talk tracks are a big part of WordCamp attendance, my favourite part has always been the “Hallway Track” – meeting new faces, old friends, colleagues, competitors, contributors, and more. This is where VR works beautifully in place of traditional video, which doesn’t allow for side-chats, mingling, or multiple conversations at once.

Here’s me in a Mozilla Hubs room I made. You can create your own environments, complete with scrollable keynote presentation, video embeds, and Wapuus! In this open-roofed structure, there are even literal hallways!

The tools exist. I’ve seen conferences held in Altspace, Engage, Rumii and others. Even Bigscreen would make a great destination for a WordPress meetup. Maybe Mozilla Hubs, which is web-based and open source, would be a perfect cultural fit for a WordPress Event.


I’ve been involved in the VR space since 2017, and I really believe that now is the right time to start a virtual reality meetup for WordPress. I’d love to hear your thoughts on the idea.

And please, if you’re interested in helping out, let me know!


Joining a video chat from VR

My team at work has a weekly call, codenamed “Strategy Sync”, where we chit-chat and play a few rounds of Rocket League. This week, I joined the call from the metaverse via Bigscreen, which allowed me to beam my colleagues onto a giant screen in my virtual lounge room.

The best part was, they could see me, too! Well, my virtual avatar anyway. Here’s how I set it up.

What you’ll need

  • A PC VR setup with Bigscreen installed
  • OBS Studio and the OBS-VirtualCam plugin
  • A video chat application (I used Zoom)

How to use Bigscreen camera as a webcam

  1. Install OBS Studio and OBS-VirtualCam
  2. Launch Bigscreen
  3. In the Bigscreen menu, select Tools > Camera > Capture Mode
  4. Launch OBS Studio, and select Tools > Virtual Cam
  5. Turn on Horizontal Flip, and press Start
  6. Launch your video chat application (I used Zoom), and select OBS as your webcam

Back in Virtual Reality, you should see your Bigscreen environment, and a selfie-stick camera. This camera is now your virtual webcam!


Experience Design for the Blind

When creating VR games and experiences, we should design them so that they can be used by the blind.

VR for blind people may seem counterintuitive, but if you think it through it makes a lot of sense. The thought occurred to me after visiting the Notes on Blindness VR experience, followed by something Lucas Rizzotto said on the Research VR Podcast.

I hate buttons, and I hate two-dimensional interfaces… a Like button is exactly not the way to do it.

Lucas Rizzotto

It’s often said that VR is a visual medium, but with properly implemented spatial audio, VR can be an auditory medium too. There’s no reason why someone who experiences actual reality without any visual information couldn’t do the same in a virtual environment.

Thinking about our virtual worlds in this way also helps us to imagine interaction paradigms that fit better in a 3D space. For example, if 2D menus are out – what creative possibilities exist to replace them?

Some other examples:

  • If the user has no visual information to understand their position in the world, what audio cues can I provide?
  • Maybe there should be a lake rippling to one side, and the wind rustling the leaves in the trees behind?
  • How do I represent locomotion and movement with sound?
  • Are there any sounds reflecting the player’s status (health, stamina, or effects)?
  • How can I precisely position an obstacle or goal with audio cues? (See the sketch below.)

Of course, thinking about our virtual worlds in this way will profoundly increase immersion for everyone.

If you enjoyed this post, please consider adding this website to your bookmarks or favourites. Bookmarks are easy, free, private, and require no special software.


Planet Defence

WebVR experiment #2 with A-Frame

Arrow keys to move the turret, space to shoot. Save the planet (it’s behind you).

Debrief

This project turned out to be much harder than I anticipated!

The turret charging its laser…

One of the first issues I ran into was rotating the turret. Because of the shape of the model, the rotation point was totally off, and there’s no way, as far as I can tell, to fix this in A-Frame or three.js.

What I ended up doing was pretty neat: I created a box and placed it at the rotation point that I wanted. Then I made the turret model a child of that box, and positioned it relative to its parent. That way, I can apply rotation to the box and the turret rotates around this point too.
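
Sketched in A-Frame markup, that looks something like this (the model ID and offsets are hypothetical; hiding the box’s own mesh via its material is just one way to keep the helper invisible):

    <!-- A box placed at the desired rotation point, with its mesh hidden. -->
    <a-box id="pivot" position="0 1 -3" material="visible: false">
      <!-- The turret model is positioned relative to its parent. -->
      <a-entity gltf-model="#turret-model" position="0.5 -0.2 0"></a-entity>
    </a-box>

    <script>
      // Rotating the parent box swings the turret around the corrected point.
      document.querySelector('#pivot').setAttribute('rotation', {x: 0, y: 45, z: 0});
    </script>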

There were lots of animations happening here. The turret needs to rotate, the beam grows in scale and shifts position, the light surrounding the beam grows in intensity, and finally, the beam shoots off into the distance. I found that using A-Frame’s <a-animation> was messy and unwieldy. In my last experiment, I found myself having to clean up the DOM once each animation had completed. Instead, I opted to use TWEEN, which is bundled with three.js, and hence with A-Frame.

Another issue I ran into was positioning the beam. There are two states for the beam: loading and firing. When it’s loading, it really needs to be a child of the turret, so that it can be positioned and animated in exactly the right place, and continue to move with the turret, before it’s fired. However, after it’s fired, it should not be linked to or follow the turret rotation in any way.

To solve this, I use two different beams. The loading beam is positioned as a child of the turret. When it’s ready to fire, I need its position and rotation, so I can apply them to the second “firing” beam. The problem here is that the “loading” beam’s position is relative to its parent.

To solve this, I was able to grab its world position by creating a new THREE.Vector3, and using the setFromMatrixPosition method with the “loading” beam’s beam.object3D.matrixWorld property. I then apply the world position to the “firing” beam, as well as the rotation of the turret.
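
In code, that hand-off looks roughly like this (loadingBeam, firingBeam, and turret are illustrative element references):

    // The loading beam's position attribute is relative to the turret,
    // so extract its world position from the world matrix instead.
    var worldPosition = new THREE.Vector3();
    worldPosition.setFromMatrixPosition(loadingBeam.object3D.matrixWorld);

    // Hand that position, plus the turret's rotation, to the firing beam.
    firingBeam.setAttribute('position', worldPosition);
    firingBeam.setAttribute('rotation', turret.getAttribute('rotation'));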

Once the firing beam was in place, I had a lot of difficulty with the released beam actually firing. TWEEN uses the variables as they were set when defining the tween, not as they are set when the tween starts. Even changing the value of a variable during a tween’s onStart method won’t have any effect on the value during onUpdate.

In the end I resolved this by recalculating the position during the onUpdate method (deriving the current position as a percentage between the start and end positions), which isn’t an optimal use of resources, but the best I could manage.
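
Here’s a sketch of that workaround: tween a bare 0-to-1 progress value, and derive the real position on every frame. (start, end, and beam are illustrative; TWEEN.update() is assumed to be driven each frame, which the A-Frame builds that bundle tween.js do from their render loop.)

    // Tween a simple progress value instead of the position itself.
    var progress = { t: 0 };
    new TWEEN.Tween(progress)
      .to({ t: 1 }, 500)
      .onUpdate(function () {
        // Read start/end fresh each frame, so late changes still apply.
        var position = start.clone().lerp(end, progress.t);
        beam.setAttribute('position', position);
      })
      .start();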

The next major challenge I faced was figuring out the end point that I wanted the beam to fire to. It’s no good just animating the beam’s position.z, because this doesn’t take rotation into account (the z axis always points the same way, no matter where the turret is pointing).

After looking into some complicated solutions (such as creating a new Matrix4 with the turret’s quaternion, and translating the z position of the matrix) I finally discovered three.js’s very handy translateZ method, which did all the heavy lifting!
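
Since translateZ moves an object along its own (rotated) local z axis, the end point can be found with something like this sketch (the distance is arbitrary, and because the firing beam sits at the scene root, its local position doubles as its world position):

    // Nudge the beam forward along its own facing, record where it
    // lands, then put it back.
    var obj = firingBeam.object3D;
    obj.translateZ(-100);                 // forward along the beam's local z axis
    var endPoint = obj.position.clone();  // the point to animate toward
    obj.translateZ(100);                  // restore the original position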

To-do

  • Sounds
  • Add controller support for moving and firing the turret
  • Add enemy spacecraft, flying toward the planet for you to shoot at
  • Add collision between beam and enemies
  • Explosions

Drone Attack

WebVR experiment #1 with A-Frame

WASD to move around. Look at a drone to fire your laser at it.

Debrief

This was a fun first project! I ran into some interesting problems along the way, but mostly things went pretty smoothly.

Shooting at drones is much less violent than at humans.

A lot of the fun for me on this project has been playing with lights and sound. When the laser is activated, it moves a red spotlight onto the target.

The positioning of the sounds adds a lot to the scene, and is super easy in A-Frame – I just made each sound a child of the element it was emitting from. You'll notice that as you walk closer to the drones they become louder, and the same is true for the sparking sound, while the laser sound emits from the camera so it's always the same volume.

I ran into lots of trouble with the particle component (the sparks) – it wasn't playing nicely with the environment component. It took me a while, but I eventually tracked it down to this bug, which I resolved (at least for now) by removing fog from the environment.

The position of the laser was another difficult aspect. It took me a while to realise that if I matched the start point with the camera position, I would be looking directly down the line, and thus unable to see it!

I'm not quite happy with the single-pixel-width line. Of course, I could use a cylinder, but shapes like that are generated with a width, height, depth, rotation, and position, as opposed to my ideal case: start, end, and diameter.

Another problem is that the start and end positions can change while the laser line is visible (if the camera or drone moves). I could lock the laser to the camera by making it a child of the camera, but there would be no way of locking it on the drone end (plus I would have to deal with converting the world position of the drone to a position relative to the camera).

So, rather than do that, I opted for the more resource-intensive method of reapplying the start and end positions of the laser line on every tick. In hindsight, this is far from ideal, and the likely cause of memory crashes (especially on mobile).
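
A rough sketch of that per-tick approach, assuming A-Frame's line component and a hypothetical laser-between component attached to the laser entity:

    // Re-stretch the laser between the camera and its target every frame.
    AFRAME.registerComponent('laser-between', {
      schema: { target: { type: 'selector' } },
      tick: function () {
        var cam = this.el.sceneEl.camera.el.object3D.position;
        var drone = this.data.target.object3D.position;
        // Offset the start slightly below the camera, so we aren't looking
        // straight down the line (which would make it invisible).
        this.el.setAttribute('line', {
          start: { x: cam.x, y: cam.y - 0.2, z: cam.z },
          end: { x: drone.x, y: drone.y, z: drone.z }
        });
      }
    });

(Allocating fresh objects on every tick, as this sketch does, only adds to the garbage-collection pressure.)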

I did experiment with a Curve component, which allowed me to create a curve with a start and end position, and draw a shape along that curve (I used a repeating cylinder). Unfortunately, working with this component on every single tick was far too slow.

What I'd like to try next is drawing the laser as a child of the target (so that it moves with the target), and if the camera moves, just turn the laser off until a new click event occurs.

To-do

  • Resolve memory leak
  • Drones explode after x seconds of being hit by laser
  • Scores
  • Timer
  • Start / Restart