We believe that if Tide can encourage and guide WordPress developers to write better code, every WordPress site (and by extension, the entire web) will be better. That’s how Tide got its name: a rising tide lifts all boats.
As an advocate for open source, the open web, and privacy, my choice of browser is a critical part of my work. Some thoughts:
The only true open source browser. If I have a problem, I can file a bug on Bugzilla, and submit a patch to fix it. More importantly, if I’m ever curious about how the browser works under the hood, I can check the source.
For me, this makes Firefox an extremely persuasive option, and is the reason I try it out from time to time. Sadly, I’ve not yet been able to stick with it, and here’s why:
I am in the Mac / iOS ecosystem, and although Safari may lack some features, it makes up for them in others:
Best battery and resource efficiency (in my experience).
At least the browser engine (Webkit) is open source.
For these reasons, Safari has been my primary browser for many years now.
Due to privacy concerns with Google’s ad-based business model, I try not to use any Google services. When I do (YouTube), I make sure to block all cookies and browser storage for those domains (which I can do with 1Blocker).
Chrome’s deep Google integration makes it a non-starter for me. Most of Chrome’s benefits can be summed up as “better integration with Google products”, which doesn’t help me at all.
I don’t have a personal Google account, but I do need one for work. For those times, I have my default browser set to:
My system default browser is set to Choosy – a System Preferences extension that smartly handles which browser to open depending on the URL.
I use Choosy to set up rules for the services that either require my work’s Google account, or simply don’t work well in Safari. This way, when a colleague drops me a Google Doc link to review, it automatically opens Google Chrome, which is signed into my work Google account.
This way, the vast majority of my web browsing happens without being logged into Google. Since 1Blocker is also blocking Google’s cookies and analytics, my browsing activity gets to remain private.
Although supporting an open source browser would be my ideal, my daily driver is Safari, due to its native integrations with Mac and iOS. I also use Choosy for the times when I need to be signed into my Google account (in which case I use Chrome, sparingly).
If you enjoyed this post, please consider adding this website to your bookmarks or favourites. Bookmarks are easy, free, private, and require no special software.
Arrow keys to move the turret, space to shoot. Save the planet (it’s behind you).
This project turned out to be much harder than I anticipated!
One of the first issues I ran into was rotating the turret. Because of the shape of the model, the rotation point was totally off, and there’s no way, as far as I can tell, to fix this in A-Frame or three.js.
What I ended up doing was pretty neat: I created a box and placed it at the rotation point that I wanted. Then I made the turret model a child of that box, and positioned it relative to its parent. That way, I can apply rotation to the box and the turret rotates around that point too.
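The parenting trick above can be sketched in A-Frame markup. The entity ids, model reference, and offset values here are illustrative, not the project's actual scene:

```html
<!-- The invisible parent entity acts as the rotation pivot. The turret
     model is offset inside it, so rotating the parent spins the model
     around the parent's origin rather than the model's own origin. -->
<a-entity id="turret-pivot" position="0 1 -3" rotation="0 45 0">
  <!-- hypothetical asset id; the offset compensates for where the
       model's built-in origin happens to be -->
  <a-entity gltf-model="#turret-model" position="0 -0.5 0.8"></a-entity>
</a-entity>
```

Rotation applied to `#turret-pivot` now behaves as if the model's pivot were in the right place.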
There were lots of animations happening here. The turret needs to rotate, the beam grows in scale and position, the light surrounding the beam grows in intensity, and finally, the beam shoots off into the distance. I found that using A-Frame’s <a-animation> was messy and unwieldy. In my last experiment, I found myself having to clean up the DOM once the animation had completed. Instead, I opted to use TWEEN, which is part of three.js, and hence part of A-Frame.
Another issue I ran into was positioning the beam. There are two states for the beam: loading and firing. When it’s loading, it really needs to be a child of the turret, so that it can be positioned and animated in exactly the right place, and continue to move with the turret, before it’s fired. However, after it’s fired, it should not be linked to or follow the turret rotation in any way.
To solve this, I use two different beams. The loading beam is positioned as a child of the turret. When it’s ready to fire, I need its position and rotation, so I can apply them to the second “firing” beam. The problem here is that the “loading” beam’s position is relative to its parent.
To solve this, I was able to grab its world position by creating a new THREE.Vector3 and using the setFromMatrixPosition method with the “loading” beam’s beam.object3D.matrixWorld property. I then apply the world position to the “firing” beam, as well as the rotation of the turret.
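What setFromMatrixPosition does under the hood is simple: a three.js Matrix4 stores its elements in column-major order, so the world translation lives in the last column. Here is a minimal sketch in plain JavaScript (no three.js required), with an illustrative matrix:

```javascript
// Extract the translation from a column-major 4x4 matrix, as stored in
// object3D.matrixWorld.elements -- the x, y, z translation sits at
// indices 12, 13 and 14. This mirrors THREE.Vector3.setFromMatrixPosition.
function worldPositionFromMatrix(elements) {
  return { x: elements[12], y: elements[13], z: elements[14] };
}

// Example: an identity matrix translated to (1, 2, 3).
const m = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  1, 2, 3, 1,
];
const pos = worldPositionFromMatrix(m); // → { x: 1, y: 2, z: 3 }
```

In A-Frame itself, the one-liner is `new THREE.Vector3().setFromMatrixPosition(beam.object3D.matrixWorld)`.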
Once the firing beam was in place, I had a lot of difficulty with the released beam actually firing. TWEEN uses the variables as they were set when defining the tween, not as they are set when the tween starts. Even changing the value of a variable during a tween’s onStart method won’t have any effect on the value during onUpdate.
In the end, I resolved this by calculating the position (the end position, and the current position as a percentage between start and end) inside the onUpdate method, which isn’t an optimal use of resources, but was the best I could manage.
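The workaround amounts to tweening a progress value from 0 to 1 and deriving the position each frame, so the start and end points can be read fresh rather than frozen at tween-creation time. A minimal sketch of that interpolation, shown without the TWEEN library itself (the names are illustrative):

```javascript
// Linear interpolation between two scalars.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Derive the beam's position from a tweened progress value t (0..1),
// reading start/end at call time instead of baking them into the tween.
function beamPositionAt(start, end, t) {
  return {
    x: lerp(start.x, end.x, t),
    y: lerp(start.y, end.y, t),
    z: lerp(start.z, end.z, t),
  };
}

// Halfway between the muzzle and a target 10 units down -z:
const mid = beamPositionAt({ x: 0, y: 1, z: 0 }, { x: 0, y: 1, z: -10 }, 0.5);
// → { x: 0, y: 1, z: -5 }
```

With TWEEN, `t` would be the single value animated, and `beamPositionAt` would be called from `onUpdate`.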
The next major challenge I faced was figuring out the end point that I wanted the beam to fire to. It’s no good just animating the beam’s position.z, because this doesn’t take into account rotation (z is always in the same place, no matter where the turret is pointing).
After looking into some complicated solutions (such as creating a new Matrix4 with the turret’s quaternion, and translating the z position of the matrix) I finally discovered three.js’s very handy translateZ method, which did all the heavy lifting!
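translateZ(d) moves an object a distance d along its own local z axis, whatever its rotation. For a turret that only rotates about y (yaw), that reduces to simple trigonometry. The sketch below is that yaw-only special case, not three.js’s actual implementation (which handles full quaternion rotation):

```javascript
// Move a point distance d along its local +z axis, given a yaw rotation
// about the y axis (in radians). With yaw = 0, local +z is world +z;
// rotated 90 degrees, local +z points along world +x.
function translateZ(position, yaw, d) {
  return {
    x: position.x + d * Math.sin(yaw),
    y: position.y,
    z: position.z + d * Math.cos(yaw),
  };
}

// Facing straight ahead: moves 5 units along world z.
const p = translateZ({ x: 0, y: 0, z: 0 }, 0, 5);
// Rotated 90° about y: the same call moves 5 units along world x.
const q = translateZ({ x: 0, y: 0, z: 0 }, Math.PI / 2, 5);
```

In the actual scene, `turret.object3D.translateZ(distance)` does all of this (and more) in one call.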
Add controller support for moving and firing the turret
Add enemy spacecraft, flying toward the planet for you to shoot at
WASD to move around. Look at a drone to fire your laser at it.
This was a fun first project! I ran into some interesting problems along the way, but mostly things went pretty smoothly.
A lot of the fun for me on this project has been playing with lights and sound. When the laser is activated, it moves a red spotlight onto the target.
The positioning of the sounds adds a lot to the scene, and is super easy in A-Frame – I just made each sound a child of the element it was emitting from. You'll notice that as you walk close to the drones they become louder, and the same is true for the sparking sound, while the laser sound emits from the camera so it's always the same volume.
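That parent-child structure can be sketched in A-Frame markup. Asset ids and positions here are illustrative:

```html
<!-- Each sound is a child of its emitter, so A-Frame's positional audio
     handles distance attenuation automatically. The laser sound sits on
     the camera, so its volume never changes as you move. -->
<a-entity id="drone" gltf-model="#drone-model" position="2 1.5 -4">
  <a-entity sound="src: #drone-hum; autoplay: true; loop: true"></a-entity>
</a-entity>

<a-entity camera look-controls wasd-controls>
  <a-entity id="laser-sound" sound="src: #laser-zap"></a-entity>
</a-entity>
```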
I ran into lots of trouble with the particle component (the sparks) – it wasn't playing nicely with the environment component. It took me a while, but I eventually tracked it down to this bug, which I resolved (at least for now) by removing fog from the environment.
The position of the laser was another difficult aspect. It took me a while to realise that if I matched the start point with the camera position, I would be looking directly down the line, and thus unable to see it!
I'm not quite happy with the single-pixel-wide line. Of course, I could use a cylinder, but shapes like that are generated with a width, height, depth, rotation, and position, as opposed to my ideal case: start, end, and diameter.
Another problem is that the start and end positions can change while the laser line is visible (if the camera or drone moves). I could lock the laser to the camera by making it a child of the camera, but there would be no way of locking it on the drone end (plus I would have to deal with converting the world position of the drone to a position relative to the camera).
So, rather than do that, I opted for the more resource intensive method of reapplying the start and end position of the laser line on every tick. In hindsight, this is far from ideal, and the likely cause of memory crashes (especially on mobile).
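Updating on every tick is not inherently expensive; what hurts is allocating new objects each frame, which builds garbage-collector pressure. One mitigation (a sketch under assumed names, not the project's actual code) is to preallocate a single buffer for the line's two endpoints and overwrite it each tick:

```javascript
// One reusable buffer for the laser line's two endpoints:
// [x1, y1, z1, x2, y2, z2]. Overwriting it each tick avoids per-frame
// allocation, a common cause of GC pauses and memory growth in WebVR.
const endpoints = new Float32Array(6);

function updateLaser(cameraPos, dronePos) {
  endpoints[0] = cameraPos.x;
  endpoints[1] = cameraPos.y;
  endpoints[2] = cameraPos.z;
  endpoints[3] = dronePos.x;
  endpoints[4] = dronePos.y;
  endpoints[5] = dronePos.z;
  return endpoints; // hand this to the line geometry's position attribute
}

updateLaser({ x: 0, y: 1.6, z: 0 }, { x: 2, y: 1.5, z: -4 });
```

In three.js terms, this buffer would back a `BufferAttribute` whose `needsUpdate` flag is set after each write, rather than rebuilding the geometry.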
I did experiment with a Curve component, which allowed me to create a curve between a start and end position, and draw a shape along that curve (I used a repeating cylinder). Unfortunately, working with this component on every single tick was far too slow.
What I'd like to try next is drawing the laser as a child of the target (so that it moves with the target), and if the camera moves, just turn the laser off until a new click event occurs.
Resolve memory leak
Drones explode after x seconds of being hit by laser
There's a lot of information out there about designing the web for accessibility. Over the last 10 years, the world has learned a lot about how to accommodate a vast diversity of different needs and abilities.
So, when it comes to designing virtual experiences, we don't need to start from scratch. Here are a few things you should consider from a VR accessibility point of view.
Like responsive web design, which accounts for the screen size of the user, VR experiences should be responsive to height and posture. Consider audiences who use wheelchairs, are unable to stand for long, or are short (including children).
Don't place buttons or interactive elements in hard to reach positions
Ensure that NPC eye-gaze includes height, not just direction
Include locomotion options which can be done from a seated position
VR experiences shouldn't rely solely on audio cues to grab a user's attention, or require audio in order to complete a task. Not only does this allow your experience to be accessible to hearing impaired people, it also lets users play on mute if they want to.
Provide a subtitles option for speech
Provide visual indicators where positional audio is used
Provide visual cues for success or failure events (e.g. failure to start an engine should be visible as well as audible)
Current hardware resolutions make it difficult enough to discern text as it is. But as the resolution of VR headsets increases, don't be tempted to reduce font sizes to match resolution. Consider users with a vision impairment that may not be able to make out small text.
Recommended font size is 3.45°, or a height of 6.04cm at a distance of one metre
Text should face the observer perpendicularly, and rotate to follow the observer's view
Use a line length of 20–40 characters per line. With bigger fonts, lines should be shorter.
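The centimetre figure above follows from basic trigonometry: an object subtending a visual angle θ at distance d has a height of roughly d·tan θ. A quick sanity check (which lands at about 6.03cm; the small difference from the quoted 6.04cm comes down to rounding):

```javascript
// Physical height of an object subtending `deg` degrees at `d` metres.
function angularHeight(deg, d) {
  return d * Math.tan((deg * Math.PI) / 180);
}

const heightMetres = angularHeight(3.45, 1); // ≈ 0.0603 m, i.e. ~6 cm
```

The same formula tells you how much bigger text must get as it moves further away to stay readable.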
Color blindness or color vision deficiency (CVD) affects around 1 in 12 men and 1 in 200 women worldwide. This means that for every 100 users that visit your website or app, up to 8 people could actually experience the content much differently than you’d expect.
Virtual worlds tend to be visually rich, which often means a much broader range of colour than one would typically see on a website, or even in the real world. With this in mind, strive to ensure that colour isn't used as the only way of displaying contrast.
Choose a variety of textures to provide better contrast between visual elements
Use both colours and symbols in interface design
Avoid "bad" colour combos (e.g. Green + Red, or Blue + Purple) for primary elements
VR is still in its infancy – we're still learning about what works and what doesn't. As the industry grows, it's important to continue prioritising accessibility, so that our virtual stories, games, environments, and tools can be explored by everyone.
When the internet was popularised, the world started asking questions about topics like freedom of access to information, the nature of "digital goods" / "digital ownership", and globalisation.
Now that we're on the cusp of the next technology revolution, I'm excited about more compelling questions entering the zeitgeist.
Virtual Reality is currently being pushed forward by the gaming market (as computing once was). But it won't be long (2019, 2020) before consumer VR starts making its way into every home. All we need is:
Standalone headsets (no computer required, coming 2018–2019)
Faster wireless communication (5G, coming 2020–2021)
Cloud rendering (2018–2019)
Once VR has become popularised, the global conversation will shift to focus on some very interesting questions. What is the nature of reality? Does quantum mechanics prove that we're already living in a simulation? Is consciousness emergent from biology, or something deeper? Does "self" even exist?
At the same time, I hope we see this technology creating new social dynamics, forming new partnerships and friendships, in ways that flat screen communication has never been capable of.
I don't think we'll recognise the world 5 years from now.
Postscript: One of my earliest VR experiences was sitting in AltSpace and meeting a Rabbi. We discussed the future Halakha (Jewish law) of VR for hours. Is flying around Google Earth considered "travelling" on the Sabbath? Should my avatar wear a kippah? Is my avatar Jewish? Is it permitted to eat virtual pork? These are questions that will have real authoritative answers in the near future.
Here's a productivity tip I've rediscovered, straight from 2003: Browser Bookmarks.
Since I left social media, I found it hard to keep track of four things:
News (from sources I care about)
Blogs (written thoughtfully and regularly)
Photos (from family and friends)
Videos (information and entertainment from sources I trust)
My first inclination was to turn to RSS – the ancient XML format that kept everyone up to date in the 2000s. But RSS is dying, and I believe blogs should be read in the context of their site (design).
Instead, I started using the browser feature that's been around since Netscape: ⭐️ Bookmarks! I created a folder for each of those four categories, and whenever the mood strikes me, I just right click and choose "Open in New Tabs".
News and Blogs are self explanatory, but it bears stating that you can still follow Instagrammers and YouTube channels without an account on their service.
You can view any public Instagram account online by visiting https://instagram.com/[username].
For YouTube, I like to visit the channel page, then click on the Videos tab, and bookmark that. This way I'm always seeing a list of the latest videos from that channel. I also use 1Blocker to block cookies from YouTube (so that the videos I watch don't result in "Recommendations").
I hope you'll consider hitting Command+D (or Ctrl+D), and visiting this blog again soon.
Inspired by Seth Godin, I recently attempted a daily writing project. I committed to write one blog post every day, indefinitely.
Here are my reflections.
Writing takes time. Not the actual typing – that part is easy. But finding inspiration every day is a serious commitment. It can take hours, and it can't be forced.
Sometimes opening yourself up to inspiration means sitting in a café reading a magazine, or going for a stroll through the park, or reading a book. Let's be real: I have a family and a job, I don't have time to wistfully wait in the bath for my eureka! moment every single day.
After a few months, I gave up. And when I gave up… I really gave up. I didn't write again until… well, now.
I've realised that, at least for me (and maybe for you, too?), trying to force a daily routine isn't the best way of falling in love with a habit or practice. I advocate for a different approach. Let's call it…
No Pressure Weekday Habits
I'll illustrate this habit-building technique with an example: Meditation. I love meditation, but I haven't always. At first, I only loved the idea of meditation, the practice took some getting used to.
All the books I read told me that it was vital that I meditate every single day for the first 3 months (a common trope among daily habit pushers). Other books told me to start with just 5 minutes a day (or write only 1–2 sentences, or run for only 1km).
That wasn't working. So instead, I decided to commit to the following:
Meditate for at least 30 minutes, but only on weekdays, and only if I feel like it.
In the end, I found that my intuition here worked wonderfully. It was the pressure of not missing a day that caused me to give up, and the triviality of "small habits" that caused me to give them away. Now I often happily meditate for 20–30 minutes, and I do so most days.
So, back to writing.
After a few days of writing every day, I started feeling stressed, worried, and overworked. Worse – the short posts were often uninspired or forced. That's not the sort of writer I want to be.
Instead, I'll be the writer who, free of that pressure, still ends up tapping out a decent chunk of valuable content most days.