I began by creating a new Unity project and importing the Oculus SDK (Software Development Kit). ‘Meta’ (the new owners of Oculus) have been releasing new versions of the SDK regularly, with more and more features being added. One of the experimental features is incredibly similar to what I am trying to achieve with this part of the project, in that the player can nominate the boundary of a play room and the things within it:

This is annoying, as there was something satisfying about the concept of beating a global conglomerate to the punch, but there is a shining light at the end of this particular tunnel: there are no games or support outside of their plans to roll out the ‘Metaverse’. My intention from the beginning was to investigate how turning the home into a play area would change the ways in which we can play VR games and what becomes possible, so this portion can be a recreation that acts as a foundation for further research.

After setting up the Unity project and Oculus SDK, I thoroughly tested pushing build versions to the headset. This was straightforward and effectively worked out of the box. Conceptually, this Mesh Generator would work by:

The suggested system involves the player laying down a ‘Bottom Corner’ (red) and a ‘Top Corner’ (green), with the program determining the other two corners from these points.

Because the walls of a room generally share a consistent height and run between straight corners, this method will suit the majority of use cases. I also want to include the ability to manually adjust corner points, but I will create this functionality later.

I began by implementing a simple Instantiate function that created a GameObject at the position of the selected controller. This GameObject’s location was tied to the selected hand controller and could be ‘dropped’, by severing this tie, once the player was happy with the placement. Finally, the ‘LevelManager’ would record this object in a List so it could be referred to later. This would act as the ‘Bottom Corner’ of the wall as well as the floor level.
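As a rough illustration, the place-and-drop mechanic could look something like the sketch below. The names here (‘CornerPlacer’, ‘cornerPrefab’, ‘controllerAnchor’) are hypothetical stand-ins rather than the actual identifiers from my project:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of the place-and-drop mechanic described above.
public class CornerPlacer : MonoBehaviour
{
    public GameObject cornerPrefab;      // small marker shown at the controller (assumed)
    public Transform controllerAnchor;   // e.g. the right hand controller transform (assumed)

    private GameObject heldCorner;       // the corner currently following the controller
    public List<GameObject> placedCorners = new List<GameObject>();

    // Spawn a corner marker and tie it to the controller.
    public void BeginPlacement()
    {
        heldCorner = Instantiate(cornerPrefab, controllerAnchor.position, Quaternion.identity);
        heldCorner.transform.SetParent(controllerAnchor); // follows the hand
    }

    // Sever the tie, leaving the marker where it was dropped.
    public void Drop()
    {
        heldCorner.transform.SetParent(null);
        placedCorners.Add(heldCorner); // the LevelManager's record of corners
        heldCorner = null;
    }
}
```

Tying the marker to the controller by parenting means it follows the hand for free; ‘dropping’ is then just a matter of un-parenting.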

This same function was then modified to create a second GameObject, which behaved identically except that it took the ‘Bottom Corner’ GameObject as its parent when ‘dropped’. This setup collates all the locational data for each wall together. Once the player had finished this process, the walls could be combined into a single mesh.
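The only change for the second point is where it ends up in the hierarchy when dropped; adding to the CornerPlacer sketch above, again with assumed names:

```csharp
// Variation on Drop() for the 'Top Corner' point: instead of being left at
// the scene root, it is parented to its wall's 'Bottom Corner' so the pair's
// locational data stays collated together. 'bottomCorner' is an assumed reference.
public void DropTopCorner(GameObject bottomCorner)
{
    heldCorner.transform.SetParent(bottomCorner.transform);
    heldCorner = null;
}
```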

To create a mesh, I needed to approach this the same way as making a 3D model in Maya. First, I needed to establish vertices: the points that the mesh’s edges and faces run between. In the case of a square there are 4:

Unity’s ‘MeshFilter’ component holds a Mesh whose ‘vertices’ array allows us to supply a list of Vector3 (locational) points. As mentioned above, I have two points of reference but need four (derived as sketched after the list below):

  • Point ‘1’ would be our ‘Bottom Corner’.
  • Point ‘3’ would be our ‘Top Corner’.
  • Point ‘2’ would take the ‘Y’ coordinate of point ‘1’ with the horizontal (‘X’ and ‘Z’) coordinates of point ‘3’.
  • Point ‘4’ would take the ‘Y’ coordinate of point ‘3’ with the horizontal (‘X’ and ‘Z’) coordinates of point ‘1’.
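Deriving the two missing corners is just a matter of swapping components between the two dropped points. A minimal sketch, with ‘BuildCorners’ being my own shorthand rather than a name from the project:

```csharp
using UnityEngine;

public static class WallCorners
{
    // Derive the four corners of a vertical wall quad from the two dropped
    // points: bottom = 'Bottom Corner' (point 1), top = 'Top Corner' (point 3).
    public static Vector3[] BuildCorners(Vector3 bottom, Vector3 top)
    {
        Vector3 p1 = bottom;                                 // point 1: bottom corner
        Vector3 p2 = new Vector3(top.x, bottom.y, top.z);    // point 2: floor height, far side
        Vector3 p3 = top;                                    // point 3: top corner
        Vector3 p4 = new Vector3(bottom.x, top.y, bottom.z); // point 4: ceiling height, near side
        return new[] { p1, p2, p3, p4 };
    }
}
```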

We also need to consider triangles, as game engines like Unity and Unreal require triangular meshes. Normally a modelling package would convert quads to triangles automatically; however, as we are building from scratch, we need to think about how the square above would look in a triangular configuration.

Pretty simple stuff. In this case, our square’s triangles would refer to indices in the list of vertex points (see the sketch after the list below):

  • Triangle 1: 1,2,3
  • Triangle 2: 1,3,4
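Putting vertices and triangles together, one wall quad can be assembled roughly as follows. Note that Unity’s triangle array is 0-indexed, so points 1 to 4 above become indices 0 to 3; the component layout here is a generic sketch rather than my exact script:

```csharp
using UnityEngine;

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class WallMeshBuilder : MonoBehaviour
{
    // Build one wall quad from the 'Bottom Corner' and 'Top Corner' points.
    public void Build(Vector3 bottom, Vector3 top)
    {
        var mesh = new Mesh();

        // Points 1-4 from the list above, stored as 0-indexed vertices 0-3.
        mesh.vertices = new[]
        {
            bottom,                                 // 0: point 1, bottom corner
            new Vector3(top.x, bottom.y, top.z),    // 1: point 2
            top,                                    // 2: point 3, top corner
            new Vector3(bottom.x, top.y, bottom.z), // 3: point 4
        };

        // Triangle 1: 1,2,3 and Triangle 2: 1,3,4 (as indices: 0,1,2 and 0,2,3).
        mesh.triangles = new[] { 0, 1, 2, 0, 2, 3 };

        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```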

With this attached to each wall iteration and the corner locations established, we have ourselves a wall:

From here, the script runs a ‘rinse and repeat’ loop that takes the ceiling height from the ‘Top Corner’ point, the floor height from the ‘Bottom Corner’ point, and the ‘X’ and ‘Z’ position of a new point (placed by the player) for the far edge of each subsequent wall.
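In sketch form, the loop walks the dropped points in order and reuses the shared floor and ceiling heights. ‘RoomBuilder’ and its fields are assumed names, and ‘WallMeshBuilder’ is the sketch from earlier:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class RoomBuilder : MonoBehaviour
{
    public WallMeshBuilder wallPrefab;   // carries the Build() sketch above
    public List<Vector3> droppedPoints;  // player-placed edge points, in order

    public float floorY;    // floor height from the first 'Bottom Corner'
    public float ceilingY;  // ceiling height from the first 'Top Corner'

    // Build one wall per consecutive pair of dropped points: each wall runs
    // from floor height at one point to ceiling height at the next.
    public void BuildWalls()
    {
        for (int i = 0; i < droppedPoints.Count - 1; i++)
        {
            Vector3 a = droppedPoints[i];
            Vector3 b = droppedPoints[i + 1];

            Vector3 bottom = new Vector3(a.x, floorY, a.z);
            Vector3 top = new Vector3(b.x, ceilingY, b.z);

            Instantiate(wallPrefab).Build(bottom, top);
        }
    }
}
```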

Finally, I implemented an ‘Auto Complete’ function (sketched after the list below) that would:

  1. Take in the last positional point entered.
  2. Take in the first positional point entered.
  3. Instantiate and position a point at the same place as the last point entered, modifying the height to match the floor height.
  4. Instantiate and position a point at the same place as the first point entered, modifying the height to match the ceiling height.
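Under the same assumptions as the previous sketch, those four steps might boil down to something like this addition to RoomBuilder:

```csharp
// Close the loop: build a final wall running from the last dropped point
// (at floor height) back to the first (at ceiling height). Assumes the
// RoomBuilder fields from the previous sketch.
public void AutoComplete()
{
    Vector3 last = droppedPoints[droppedPoints.Count - 1];  // steps 1 and 3
    Vector3 first = droppedPoints[0];                       // steps 2 and 4

    Vector3 bottom = new Vector3(last.x, floorY, last.z);   // last point at floor height
    Vector3 top = new Vector3(first.x, ceilingY, first.z);  // first point at ceiling height

    Instantiate(wallPrefab).Build(bottom, top);
}
```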

Voila! A room!

With a couple of extra functions, the player will be able to walk around the house and map the walls to a degree fairly similar to the experimental experience that ‘Meta’ released. To make this happen, I will need to investigate and implement:

  1. A Passthrough Layer: This will show the room around the player using the Quest 2’s cameras, not only allowing them to see where they are stepping, but also letting them locate the corners of the walls and map them in properly.
  2. Mesh Combining: Right now, all the walls are separate meshes. This is by design, as it has allowed me to quickly prototype and gain an understanding of the general idea, practicality and achievability of the initial concept. Combining the meshes and discarding the individuals (see the sketch after this list) will:
    1. Give us a single, fully joined mesh to work with.
    2. Eliminate the current doubled-up vertices.
    3. Allow individual vertices to be selected and moved.

  3. Mesh Saving: The intention is to take this mesh and use it as a basis for Game Design techniques like procedural generation and non-Euclidean space generation. Having a save feature will allow:

    • Easy transfer of pre-made playspace meshes between Unity scenes.
    • Playspaces to be made once and then reused.

  4. A tutorial that explains the setup process as the user goes. This will be more beneficial to me as a reminder of how to work my own program, as the study itself will focus on Game Design for games that utilise this space.
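For the mesh-combining step, Unity’s built-in Mesh.CombineMeshes looks like a promising starting point. A minimal sketch, assuming all wall objects are children of one holder object sitting at the world origin; note that CombineMeshes joins the meshes but does not weld the doubled-up vertices, so that would still need a separate pass:

```csharp
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class WallCombiner : MonoBehaviour
{
    // Merge every child wall mesh into a single mesh on this object, then
    // discard the individual wall objects. Assumes this object sits at the
    // world origin so the walls' world matrices can be used directly.
    public void Combine()
    {
        MeshFilter target = GetComponent<MeshFilter>();

        var parts = new List<CombineInstance>();
        var toDestroy = new List<GameObject>();

        foreach (MeshFilter wall in GetComponentsInChildren<MeshFilter>())
        {
            if (wall == target) continue; // skip the (empty) target filter

            parts.Add(new CombineInstance
            {
                mesh = wall.sharedMesh,
                transform = wall.transform.localToWorldMatrix
            });
            toDestroy.Add(wall.gameObject);
        }

        var combined = new Mesh();
        combined.CombineMeshes(parts.ToArray()); // one joined mesh

        target.mesh = combined;

        foreach (GameObject wall in toDestroy)
            Destroy(wall);
    }
}
```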