The practical side of this project can be summed up fairly easily: VR headsets have previously had movement limitations imposed on them by both the technology and the size of the play space. With the introduction of the wireless headset (Oculus/Meta Quest 2), we now have the potential to free-roam with a headset. This is not to say that there will be no limitations, but we have the opportunity to explore how opening VR up to free-roaming movement could change the way we play games.

As for integrating real-world limitations into a virtualised environment, I have come across a couple of other projects that achieve this in a fashion similar to what I am aiming for:

ICONVR: has a game based entirely on an open-world scenario where play space size is not really an issue. In their game the real-world space only has to reflect the physical size of the play area and doesn’t cater for any obstacles that exist within it. Effectively it is the same play space system that exists now, just on a larger scale.

Custom Home Mapper: is closer to what I would like to achieve; however, it has been created more as novelty software that capitalises on the free-roaming potential of VR and treats the gaming experience as an afterthought, touching only on fairly simple possibilities. As a first step in the direction of free-roaming VR, it is the pinnacle of current usage, and it is a project I will consider, on a technical front, as a framework to help me think about game design integration.

When first considering this topic, I was under the impression that the Quest 2 had a form of depth sensor available to developers. Unfortunately this is not the case, as Oculus/Meta have decided to keep this functionality under wraps and inaccessible for privacy reasons, so I will have to think about the best ways to establish depth perception algorithmically, or how to work around it. Currently my pipeline looks something like this:

  1. Set up the Oculus dev kit in Unity; self-explanatory. This means I will be able to interface with the headset through the game engine and deploy test versions. By the end of this step, I should be able to click ‘Build and Run’ and have a version deployed to the Oculus headset. The resources on how to do this can be found:
  2. Enable Passthrough Layer transitioning; the players will need to be able to see the room around them in order to mark out wall and obstacle locations. By the end of this step, the player should be able to see through the cameras on the headset.
  3. Dictate Positions; the player will be mapping out the room at this point. The objective is to make this as easy as possible, so they will need to be able to place points in the room that are then turned into a mesh, which will act as the ‘Play Space’. The tutorial here: will be an interesting start, although I believe I will need to get a bit creative with mesh mapping. From here they should be able to trace the entire inside of their house and potentially even determine whether the room they are currently in is different from the previous one.
  4. Saved Mesh; I don’t want players to have to do this every single time, so saving the mesh is a must. It should be pointed out to them that some items may have moved since they blocked out the obstacles, in which event Step 3 should be accessible again.
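To make steps 3 and 4 a bit more concrete, here is a rough sketch of the idea in Python (the real thing would be C# inside Unity, and all function names here are mine, not from any Oculus SDK): the traced corner points become a floor polygon, the polygon is fan-triangulated into a mesh, and the boundary is saved to disk so the player doesn’t have to re-map every session. The fan triangulation assumes the traced loop is convex or close to it; a concave room would need something like ear clipping instead.

```python
import json

def fan_triangulate(points):
    """Turn a traced floor loop into triangle indices, fanning out
    from the first point. Assumes a roughly convex boundary."""
    return [(0, i, i + 1) for i in range(1, len(points) - 1)]

def save_play_space(points, path):
    """Persist the traced boundary so players don't re-map every time."""
    with open(path, "w") as f:
        json.dump({"boundary": points}, f)

def load_play_space(path):
    """Restore a previously traced boundary as (x, z) tuples."""
    with open(path) as f:
        return [tuple(p) for p in json.load(f)["boundary"]]

# Example: a 4 m x 3 m rectangular room traced as (x, z) in metres.
room = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(fan_triangulate(room))  # two triangles: (0, 1, 2) and (0, 2, 3)
```

In Unity this would feed a `Mesh` with vertices and the flattened triangle indices, but the underlying bookkeeping is the same.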

This is about all I imagine needing from the mesh-builder part of the project. Some potentially handy ideas could be:

  1. Room prediction; generally, rooms consist of ceilings, walls and floors that are perpendicular to each other. This could let the user select a bottom corner, a top corner and then just the position of each end of each wall.
  2. Singular Point Changing; allow the player to select a single point and have the connected walls update with the new point location.
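Both ideas above can be sketched quickly. Assuming the "perpendicular room" simplification, selecting one floor corner and the diagonally opposite ceiling corner is enough to predict all eight corners of the room; and if walls are derived from the corner list rather than stored separately, moving a single point automatically updates every wall that touches it. This is an illustrative Python sketch under those assumptions (names are hypothetical, not from any SDK):

```python
def box_from_corners(bottom, top):
    """Room prediction: given one floor corner and the diagonally
    opposite ceiling corner, both (x, y, z) with y up, return the
    8 corners of an axis-aligned room. Assumes walls, floor and
    ceiling are mutually perpendicular."""
    (x0, y0, z0), (x1, y1, z1) = bottom, top
    xs, ys, zs = sorted((x0, x1)), sorted((y0, y1)), sorted((z0, z1))
    return [(x, y, z) for y in ys for x in xs for z in zs]

def walls(boundary):
    """Wall segments as consecutive corner pairs, closing the loop."""
    n = len(boundary)
    return [(boundary[i], boundary[(i + 1) % n]) for i in range(n)]

def move_corner(boundary, index, new_pos):
    """Singular point changing: move one corner; the two walls that
    share it follow, since walls are recomputed from the corners."""
    moved = list(boundary)
    moved[index] = new_pos
    return moved
```

For example, moving corner 2 of a rectangular footprint changes exactly the two wall segments adjacent to it, which is the behaviour the single-point editing idea needs.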