Over this past week I have spent some more time working on the Unity environment for my VR experience, looking into the models and assets that are needed for my script.
Adding Level Assets and 3D models
I have previously added some models for the third puzzle, which are some suitcase/luggage models, but for the start of the experience there need to be four objects that the player can choose from. These objects are a Book, Gun, Watch and a Wallet, and using Turbosquid I was able to find models for most of these items except for the Wallet, for which I have placed a simple square object as a stand-in in case I cannot find a suitable model. All of the models are currently coloured green to make them stand out in this build of the environment.
Adding NPC Characters
In my script, when the player first starts the experience they will see multiple NPC characters sat around them in the train carriage; they cannot interact with these characters, who are only there for a short time before the lights cut out and the player is sent into the full experience.
I started by looking on Turbosquid for male and female character models, but we also have access to Adobe Mixamo, which offers multiple character animations and base models that will work perfectly for this environment. I downloaded a few character models with different sitting animations, such as idling, moving their arms or sitting with their legs crossed, and placed them around the train carriage duplicate I made for the starting and end areas of the experience.
Next Steps
The next step for this environment build will be to create a driver's carriage area. I was not able to find a model that suited what I needed based on my script/storyboard, so this will need to be created in Unity or modelled in 3D modelling software.
This week we had our final crit presentations, where we presented our current progress on each project and had a chance to receive feedback and guidance on where to take the project next before the final deadline.
Stutter Draft Gameplay Capture
The video below shows the current iteration of Stutter and its gameplay, which I presented to the class during the presentation, covering the general gameplay, environment and sound, and discussing any changes and updates I'd like to make for the rest of the project's duration.
During the presentation I also displayed my current progress on the game design document, world layout, player journey and the Unity environment.
Feedback and next steps
After presenting to my class and teachers I received some very helpful and positive feedback on the VR section of this experience. The main feedback was to add a few more elements to the environment, to work on the sound mixing, and possibly to make some changes to the word rotation animations and the fonts used in the experience. I will of course also need to work towards a fully playable, polished build with minimal bugs for user testing and further feedback and suggestions.
This week we had our final crit presentations, where we presented our current progress on each project and had a chance to receive feedback and guidance on where to take the project next before the final deadline.
Blender Animation
The video below shows my current Blender model and the animation I created for this AR piece. However, I was only able to show it in Blender, as I was having some issues exporting the model and animation for use in Unity, which I will need to look into to take this project further.
AR Ceramics Test footage
The video below shows my current experimentation with the AR ceramics piece, using my original flower models animated simply in Unity to demonstrate how this would work in AR, with a QR code as the target for the whole experience.
Feedback and next steps
After presenting the current progress of my AR project, including the current models, animations, tests and game design document, the feedback was very positive, and everyone agreed it would be great to have the Blender model fully working in Unity so I can develop this idea and concept further into a refined and polished experience. I will also need to start testing the experience on a mobile device using Vuforia.
Over the past couple of weeks I have spent some more time working on my storyboard in Tilt Brush while also learning more about the software. I have gone through and tried to make sure everything matches and is in line with my script/story, and that it is presented properly within the software.
I also spent some time learning how to use the camera paths in Tilt Brush, which allow you to smoothly record anything you want within the software, as shown in the video below:
After focusing the majority of my time on developing the experience in Unity, I spent this past week going back and writing up the game design document and making updates to the player journey for this experience.
Game Design Document
We were given a template/guide on how to create the game design document, and using this guide I was able to write a full GDD for ‘Stutter’. This document explains the story, gameplay, influences, mechanics, what sets the experience apart, and the required assets. The player journey and world layout are also included in this document.
Initially my player journey only focused on displaying the correct and incorrect words to the player, but I have now tidied up the journey and also included a world/environment layout to fully show and explain what you will be doing during the entirety of the experience.
During the week I have been working on some amendments and updates to my script and game design document for ‘Mind the Gap’, while also working more on my storyboard in Tilt Brush.
Mind the Gap Unity Developments
I have spent some more time working on the environment build for Mind the Gap, continuing to use the train model I previously found on Turbosquid. The train has now been set up with two carriages joined together and multiple spotlights along the length of the carriage to create the underground effect.
The whole environment is contained inside solid black walls to simulate the train being underground, hopefully keeping the outside in darkness during the whole experience. Two lights have also been set up outside the train, animated on a loop to continuously pass the train on either side and give the impression of passing lights in the tunnel.
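The light loop itself was set up with Unity's animation tools, but as a rough sketch of the same effect in code (the speed and loop length here are placeholders, not the project's actual settings):

```csharp
using UnityEngine;

// Moves a tunnel light along the train's length and wraps it back to the
// start, giving the impression of lights passing the carriage on a loop.
public class TunnelLightLoop : MonoBehaviour
{
    public float speed = 20f;       // units per second along the carriage
    public float loopLength = 60f;  // distance travelled before wrapping

    private Vector3 startPosition;

    void Start()
    {
        startPosition = transform.position;
    }

    void Update()
    {
        // Mathf.Repeat keeps the offset in [0, loopLength) so the light loops forever.
        float offset = Mathf.Repeat(Time.time * speed, loopLength);
        transform.position = startPosition + Vector3.forward * offset;
    }
}
```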
Some of the assets and objects are now also set up in the train carriage, such as the multiple luggage items, which are grabbable by the player for the ‘Test of Strength’. As you can see from the image below, I have also used basic Unity models to create some hand rails for the ‘Test of Speed’ sequence the player must complete.
The image below also shows the pressure pad that the user will need to find and place a chosen item on, which will unlock the second carriage in the experience.
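As a minimal sketch of how the pad could detect a placed item (assuming the pad has a trigger collider and the choice objects share a hypothetical "ChoiceItem" tag; the project's actual implementation may differ):

```csharp
using UnityEngine;

// Pressure pad sketch: when one of the tagged choice items enters the pad's
// trigger collider, the door blocking the second carriage is removed.
public class PressurePad : MonoBehaviour
{
    public GameObject carriageDoor;   // door blocking the second carriage

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("ChoiceItem"))
        {
            // Disable the door object to "unlock" the next carriage.
            carriageDoor.SetActive(false);
        }
    }
}
```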
Unity Next Steps
The next step for the Unity environment is to finalise all of the objects and stages in the experience so that everything is in place, even if there is little interactivity at this stage. I also need to find a way to create a train driver's carriage at the front of the train for the final stage of the experience. At the start of the story the player will also see multiple NPC characters sat around the carriage, so I think it would be great to find some models and perhaps add animations to give a sense of immersion in the starting environment.
Updates to my Script
After receiving some feedback on the third draft of my script, I went back to refine and finalise it into a more polished state. Most of the feedback asked me to clarify the meaning or reasoning behind some story elements, while also including more detail and clarification on some of the level design and environment that players will be engaging with.
During this week I have been further experimenting with my concept for the AR experience in Vuforia. Using the model I had previously created in Unity, and after receiving some feedback and input from my tutors, I started to experiment with a larger quantity of the model to achieve full coverage of the vase.
Using an approximate sphere model based on the actual size of the vase, I started to move, rotate and plot the different flower models around it. This took a lot longer than expected, due to having to carefully position and rotate every model to follow the curve of the object, but I was eventually able to achieve it.
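For reference, the same curved placement could be automated with a small script instead of positioning every model by hand; this is only a sketch, with the prefab, radius and count as assumed placeholders:

```csharp
using UnityEngine;

// Plots flower instances in a ring around a cylinder approximating the
// vase, rotating each flower to face outward from the central axis.
public class FlowerRing : MonoBehaviour
{
    public GameObject flowerPrefab;
    public int count = 12;
    public float radius = 0.15f;   // approximate vase radius in metres
    public float height = 0f;      // height of this ring on the vase

    void Start()
    {
        for (int i = 0; i < count; i++)
        {
            float angle = i * Mathf.PI * 2f / count;
            Vector3 localPos = new Vector3(Mathf.Cos(angle) * radius, height, Mathf.Sin(angle) * radius);

            // Face the flower away from the vase's central axis.
            Vector3 outward = new Vector3(localPos.x, 0f, localPos.z).normalized;
            Quaternion rot = Quaternion.LookRotation(outward, Vector3.up);

            Instantiate(flowerPrefab, transform.TransformPoint(localPos), transform.rotation * rot, transform);
        }
    }
}
```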
The images below show the effect live through my webcam, which luckily worked successfully; if you rotate the vase, it gives quite a convincing impression that the flowers are moving around the curve with the vase, which is exactly the effect I wanted.
Blooming Animation
With the models displaying correctly around the image target through Vuforia, I also tried out some of the animation options in Unity. This meant I could prototype the flowers changing size, rotating and offsetting from one another to create a kind of pattern. After experimenting with these different animations, I feel confident that this concept will work for the final experience.
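As an illustration of the kind of motion described, a scripted version might look like the following (the spin speed, pulse amount and phase offset are hypothetical values; in the project this was done with Unity's animation options rather than code):

```csharp
using UnityEngine;

// Spins a flower and gently pulses its scale, with a per-object phase
// offset so neighbouring flowers animate out of step and form a pattern.
public class FlowerMotion : MonoBehaviour
{
    public float spinSpeed = 30f;    // degrees per second
    public float pulseAmount = 0.1f; // +/- 10% of the base scale
    public float phaseOffset = 0f;   // set differently on each flower

    private Vector3 baseScale;

    void Start()
    {
        baseScale = transform.localScale;
    }

    void Update()
    {
        transform.Rotate(0f, spinSpeed * Time.deltaTime, 0f);
        float pulse = 1f + Mathf.Sin(Time.time + phaseOffset) * pulseAmount;
        transform.localScale = baseScale * pulse;
    }
}
```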
Exporting my Blender Animation (problems)
Alternative Flower Model
Although the initial flower tutorial I followed did create a nice flower model that I felt would suit this project well, I decided to try a different tutorial, based more on a lily pad, to see if I would have more success exporting the animation. The screenshots below show some of the steps I followed in creating the flower model.
One aspect of this tutorial I quite enjoyed was the in-depth look into Blender's shader editor, creating multiple colour options including shadows and textures, which turned out very nicely overall.
Unfortunately I still had some trouble trying to export my flower model: the animation works in Blender, but however I export the model, only the central petal seems to animate and the outer petals don't move. I will need to spend some time this week finding a way to export my models so I can experiment further in Vuforia.
Recording Sound
The main theme of this concept is of course flowers and plants, so sound effects of this nature make sense for the AR experience. I started going around outside, and luckily I live near an ecology park where I should be able to record some different sound effects. I also want to record some water droplet sounds now that I have the option of using the lily pad model animation, which I think will suit my AR experience.
Next Steps
The next steps for this augmented reality experience are to get a blooming model exporting successfully from Blender, and to find and print a better, more visually appealing image target to place on the vase. As a stretch goal it would be interesting to have another image target option with a different animation effect or plant/flower.
One of the main aspects of this experience was of course to add hand models so the player can see where their hands are in the virtual space, but this is something I left until later. I started researching ways to add hand models through the XR Interaction Toolkit, which didn't seem too difficult overall, because you can attach and resize imported objects on each controller, which then appear on screen. At the moment the hands are static models without any motion or animation, but I am hoping that gesture or grab animations won't take too long to implement into my gameplay.
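In the XR Interaction Toolkit (2.x) the controller exposes a Model Prefab field, which is normally assigned in the Inspector; the code equivalent would be something like this sketch, with handPrefab as a placeholder asset and the exact timing depending on the toolkit version:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Assigns a static hand model to an XR controller. In practice this is
// usually set via the controller's "Model Prefab" field in the Inspector.
public class HandModelSetup : MonoBehaviour
{
    public Transform handPrefab;   // imported hand model, scaled to fit

    void Awake()
    {
        // The controller spawns modelPrefab under its model parent,
        // so the prefab must be assigned before that happens.
        var controller = GetComponent<XRBaseController>();
        controller.modelPrefab = handPrefab;
    }
}
```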
Adding Navigation
Navigation is another aspect where I originally wanted the experience to only work if the user physically stepped around the environment, but of course I needed to think about the accessibility of the experience and how some people might play sitting down. Researching further into the XR Interaction Toolkit, it was not too difficult to add quite a simple form of navigation to the experience.
I have now added a Continuous Move Provider, based on a locomotion system, so that the player can move around the environment at a relatively fast pace using the left-hand controller joystick, giving them the opportunity to further explore the environment they are playing in. I then included a Continuous Turn Provider, which allows the user to turn the camera left and right using the right-hand controller. This is currently set to a continuous motion, which after a few attempts gave me slight motion sickness, so I want to research this further and have set turns that instantly rotate the player in, for example, 60-degree increments.
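Switching from continuous to snap turning should mostly be a matter of swapping the provider component; a hedged sketch, assuming the action-based providers from XR Interaction Toolkit 2.x, might look like this:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Swaps continuous turning for fixed-increment snap turning, which is
// generally more comfortable for players prone to motion sickness.
public class ComfortTurnSetup : MonoBehaviour
{
    void Start()
    {
        var continuousTurn = GetComponent<ActionBasedContinuousTurnProvider>();
        var snapTurn = GetComponent<ActionBasedSnapTurnProvider>();

        // Disable smooth turning and enable fixed-increment turns instead.
        if (continuousTurn != null) continuousTurn.enabled = false;
        if (snapTurn != null)
        {
            snapTurn.enabled = true;
            snapTurn.turnAmount = 60f;   // degrees per snap, as mentioned above
        }
    }
}
```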
Boundary Walls
After adding navigation it became apparent that the boundary walls I was originally going to add to the experience would now have a use in stopping the player falling out of the environment. It was quite simple to add walls with a collider and rigidbody but no mesh, creating an invisible boundary the player cannot cross to leave the environment.
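A minimal sketch of such an invisible wall (a static collider alone is usually enough to block the player; a kinematic Rigidbody is only needed if the wall itself moves):

```csharp
using UnityEngine;

// Builds an invisible boundary wall: a BoxCollider with no MeshRenderer,
// so the player collides with it but sees nothing. Values are placeholders.
public static class BoundaryWall
{
    public static GameObject Create(Vector3 position, Vector3 size)
    {
        var wall = new GameObject("BoundaryWall");
        wall.transform.position = position;

        // A collider with no renderer blocks movement invisibly.
        var collider = wall.AddComponent<BoxCollider>();
        collider.size = size;

        return wall;
    }
}
```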
Introduction Screen
After our lesson/class on ethics in virtual reality, I knew it was extremely important to include an information or disclaimer screen to explain and justify the experience to the user. I also felt that including a title screen helps round off the whole game as a complete experience with a clear start and a clear end. To create this, I decided to use a series of simple animations so that the title ‘STUTTER’ appears, followed by the first disclaimer and then the tutorial information on how the player can start the experience.
Outro Screen
The outro screen was made in a very similar way to the introduction screen, through a series of animations timed to fade in and out one after another. It starts by displaying information about speech impediments and stuttering, followed by a message that I think is important to show players to round off their understanding of the full experience.
Typography/Word Updates
After some initial viewings of my VR experience, multiple people told me that the typography I had chosen was quite difficult to read and asked whether I had done this on purpose. Once several people had mentioned this, I knew I would need to change the typography, so I went back into my original Illustrator file and changed the fonts to something much more legible, as you can see from the images below.
Difficulty Transitions
For this project I wanted to avoid having different levels and scene transitions, to keep the scope and time frame achievable within the given deadline. This meant the transitions between my difficulty ‘layers’ would be one of the most important aspects, and something I knew I would need to leave until last, after everything else in the experience was completed.
This week I focused on ways to handle the transitions by fading the different sections in and out; each difficulty, disclaimer and conversation is separated into its own ‘Parent’ object. To start, I created a canvas triggered by an animation to produce a fade-to-black effect, blocking the player's view while the different sections load in and out of play. This was achieved by identifying each section as a game object and using ‘StartCoroutine’ and ‘IEnumerator’ to time the transition effect between difficulty levels.
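A simplified sketch of that coroutine approach, with the Animator triggers and wait times as placeholder values rather than the project's exact ones:

```csharp
using System.Collections;
using UnityEngine;

// Times a fade-to-black between two difficulty "Parent" objects,
// swapping them while the player's view is blocked by the canvas.
public class DifficultyTransition : MonoBehaviour
{
    public Animator fadeAnimator;      // drives the black canvas animation
    public float fadeDuration = 1.5f;  // seconds for the fade each way

    public void Transition(GameObject current, GameObject next)
    {
        StartCoroutine(FadeAndSwap(current, next));
    }

    private IEnumerator FadeAndSwap(GameObject current, GameObject next)
    {
        fadeAnimator.SetTrigger("FadeOut");   // black canvas fades in
        yield return new WaitForSeconds(fadeDuration);

        current.SetActive(false);             // unload the old section
        next.SetActive(true);                 // load the new one

        fadeAnimator.SetTrigger("FadeIn");    // black canvas fades out
        yield return new WaitForSeconds(fadeDuration);
    }
}
```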
Each of the ‘CorrectWord’ boxes was then assigned a counter, so that every time a correct word was shown in the box the counter went up by 1. This meant that if all three words were correct on the easy difficulty, the counter would reach 3. The transition was then triggered by an if statement: when the counter reached 3, the fade animation would start and the next difficulty level would load in. Each difficulty was assigned its own target counter value, and luckily this achieved the effect with great success. I still need to play around with the timings and lengths to see what works best, but I'm confident I can find a good length of time between levels.
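And a matching sketch of the counter logic, again with assumed names (it reuses the DifficultyTransition sketch above; the real word boxes will report correct words in their own way):

```csharp
using UnityEngine;

// Counts correct words and triggers the transition once the difficulty's
// target is reached (3 for the easy difficulty, as described above).
public class CorrectWordCounter : MonoBehaviour
{
    public int wordsRequired = 3;            // target count for this difficulty
    public DifficultyTransition transition;  // from the sketch above
    public GameObject currentDifficulty;
    public GameObject nextDifficulty;

    private int correctCount = 0;

    // Called by each CorrectWord box when it shows a correct word.
    public void RegisterCorrectWord()
    {
        correctCount++;
        if (correctCount == wordsRequired)
        {
            transition.Transition(currentDifficulty, nextDifficulty);
        }
    }
}
```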
Next Steps
The next step for this experience is to start some user testing, to see if all of the gameplay mechanics and the overall understanding of the experience work for the player. I will also need to go back over the sound and audio for the whole experience and make sure the transition timings and difficulty lengths work together to make a satisfying game.
Over the past week I have tried to finalise most of the main aspects of this project: the script, the game design document and the storyboard for the full experience.
Game Design Document:
I have now developed a full game design document for the ‘Mind the Gap’ experience, including details of dialogue, sound, environment and step-by-step breakdowns of each section and puzzle for the user.
I have now tried to further develop the storyboard and have all of the scenes and areas of my experience drawn out in Tilt Brush. The first scene below is where the player will start, showing multiple NPC characters sat around the train carriage environment.
This scene shows the first test, the ‘Test of Hearts’, in which the player will need to choose one of four items located and highlighted around the carriage. The chosen item then needs to be placed on the highlighted pressure pad on the left-hand side of the door to gain access to the next carriage and continue the experience.
The last scene of the experience is where the player meets the antagonist, who is posing as the train driver. I did not have a smaller model for this area, so it was hand drawn in Tilt Brush; I may edit this later, as it does not fit the aesthetic of the rest of the storyboard.
Now that I have looked at ways to develop a flower model and animation for my AR project, I wanted to test how well these models work in action on a curved surface such as a vase. I ran some initial tests with the notebook image target I was using last week, and this again worked fine on my webcam, with the model showing up at a high quality, which is reassuring for what I had created in Blender.
Full disclosure: the environment I was testing all of my assets in was not very well lit, but it worked well enough for my initial tests. I found a small vase I could use for the tests; it may not be the final object I use for the project, but it works great as a proof of concept for modelling everything onto a curved surface.
I then printed a small QR code to use for these tests, as I would like a more appealing image target on the final product. The QR code was taped to the vase to act as a simple trigger for the experience, and once again, using the flower model I had previously created, I made a new image target. Luckily (although the lighting did not help), the image target was recognised and the flower appeared on the side of the vase, which I was extremely happy about, and I was excited to move forward testing this idea.
After this I started to move the flower models outside of the QR code and into different positions on the X, Y and Z axes to see how everything would interact. Each model was given a different rotation, and all had animations making them spin in place so I could see how movement would work. Luckily this also worked successfully overall, and through trial and error I could get the models to look like they were balanced on top of, or growing out of, the vase. This worked even in my low-light testing conditions, so I was fairly confident it would work much better in a well-lit environment, or perhaps on a mobile device with a flashlight.
I also wanted to do a small test to see what it could look like with the flowers actually blooming from the vase, so I tried a simple animation of the objects scaling from 0 to full size. Again, this worked very well as a proof of concept, as shown in the videos of the process below.
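A minimal code sketch of that 0-to-full-size bloom, assuming a simple eased scale over a placeholder duration:

```csharp
using System.Collections;
using UnityEngine;

// Grows a flower from zero to its full size to fake a "blooming" effect,
// mirroring the scale animation described above.
public class BloomScale : MonoBehaviour
{
    public float duration = 2f;   // seconds for the full bloom

    private Vector3 fullScale;

    void Start()
    {
        fullScale = transform.localScale;
        transform.localScale = Vector3.zero;
        StartCoroutine(Bloom());
    }

    private IEnumerator Bloom()
    {
        float elapsed = 0f;
        while (elapsed < duration)
        {
            elapsed += Time.deltaTime;
            // SmoothStep eases the growth in and out for a softer bloom.
            float t = Mathf.SmoothStep(0f, 1f, elapsed / duration);
            transform.localScale = fullScale * t;
            yield return null;
        }
        transform.localScale = fullScale;
    }
}
```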
Next Steps
I am happy with how the initial tests went, and I think the next stage will be to develop better models and animations to trigger on the vase. Another nice addition would be interactive buttons that appear on the ceramics once they are scanned by a mobile device. This could also be developed further with multiple vases, each with different animations or models, showing how the idea could be used in an exhibition with multiple ceramic pieces for users to explore during the experience.