Yin-Gemis
Method

The how and why


The VR environment was developed using data from the interviews, an AI model, and a webVR application. We'll walk you through how and why we made this visualization, step by step.

Interviews are a good way of collecting qualitative data, in our case stories and women's voices and experiences. This method is often used when researching women's lives, inequality, and feminism (Roulston & Choi, 2018). The interviews also supply the text the AI story needs, adding expression and data. During an interview we could also delve deeper into a response and ask the participant for examples.

We transcribed the interviews, translated them into English, and asked a professional (with an MA degree in English literature) to examine and correct small errors to ensure the quality of the translations. The sentences were also tagged and categorized by interview question, which allowed us to later trace words and sentences back to their context. The result was an Excel file with around 4,000 English sentences, ready to train an AI model to generate the story.
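As a rough illustration of this step, loading such a file for training might look like the sketch below. The file name and column name are hypothetical stand-ins, not our actual notebook code.

```python
# Minimal sketch: load the translated interview sentences from Excel.
# "interview_sentences.xlsx" and the "sentence" column are hypothetical names.
import pandas as pd

df = pd.read_excel("interview_sentences.xlsx")
sentences = df["sentence"].dropna().tolist()
print(f"{len(sentences)} sentences ready for training")
```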

AI answers

We used an AI text generator, which originally used phrases from books as training data to create new text by predicting the next most likely word. The generator requires an input text, which can be anything related to the training data. We did the following to generate the AI answer you hear in the VR space:

  • We replaced the original book data with the 3,959 sentences from the interviews, allowing the model to train on them and learn to anticipate the next word.

  • We used the questions we asked during the interviews as the input text. The model reached an accuracy of about 60%, which is rather good for a model of this kind.

  • Next, we specified the number of words to predict, which produced the raw AI text. That text had no periods, capital letters, or formatting of any kind.

  • We selected sentences from that text and merely added periods, capital letters, and commas to make them comprehensible; the result is the AI answer text you hear in the VR space. The text was recorded by a voice-over actress.

Indeed, the sentences do not always fit and flow well; after all, the AI was trained on only 22 interviews. The more data we train it on, the better it will fit the themes and produce a cohesive narrative. We had already observed this during preliminary tests with just a few interviews.
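To make the prediction loop concrete, the sketch below shows one common way such a generator produces text from a seed question, assuming a trained Keras next-word model and a fitted tokenizer like those in the training sketch further below. The function and variable names are illustrative, not our exact notebook code.

```python
# Minimal sketch of next-word generation with a trained Keras model.
# `model`, `tokenizer`, and `max_len` are assumed to come from training.
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

def generate(seed_text, n_words, model, tokenizer, max_len):
    text = seed_text
    for _ in range(n_words):
        # Encode the running text the same way the training data was encoded.
        seq = tokenizer.texts_to_sequences([text])[0]
        seq = pad_sequences([seq], maxlen=max_len - 1, padding="pre")
        # Pick the most probable next word and append it.
        next_id = int(np.argmax(model.predict(seq, verbose=0), axis=-1)[0])
        text += " " + tokenizer.index_word.get(next_id, "")
    return text

# For example, seeding with one of our interview questions:
# print(generate("what does feminism mean to you", 50, model, tokenizer, max_len))
```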

AI model


This text generator uses TensorFlow, a machine learning software library by Google.

In short, what TensorFlow does:

  • It makes deep learning algorithms, equations, formulas, and mathematical notations accessible as a program for practical use.
  • It can be used for, among other things, speech recognition, computer vision, and natural language processing.
  • For example, Google uses it to rank search results, recommending words or phrases with comparable meanings for parts of queries that are unknown (Goldsborough, 2016).
  • In short, what the text generator does:

    It removes stop words such as "the", "a", or "an". It then tokenizes the text, assigning each word a number and replacing every word with its number. A model is created and trained (with TensorFlow) to improve its next-word predictions. The output is the AI-generated text.
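As a hedged illustration of that pipeline, a minimal training sketch under our assumptions could look as follows. The toy sentences, layer sizes, and epoch count are illustrative; in practice the input would be the ~3,959 interview sentences loaded earlier.

```python
# Minimal sketch of the pipeline: strip stop words, tokenize words to
# integer ids, build next-word training pairs, and train a Keras model.
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

STOP_WORDS = {"the", "a", "an"}  # illustrative subset

def strip_stop_words(sentence):
    return " ".join(w for w in sentence.lower().split() if w not in STOP_WORDS)

# In practice: the ~3,959 interview sentences; toy data keeps the sketch runnable.
sentences = ["equality means freedom to choose",
             "women should be free to share their stories"]
sentences = [strip_stop_words(s) for s in sentences]

tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)           # assign each word an integer id
vocab_size = len(tokenizer.word_index) + 1

# Every prefix of a sentence becomes an input; its next word is the label.
ngrams = []
for seq in tokenizer.texts_to_sequences(sentences):
    for i in range(2, len(seq) + 1):
        ngrams.append(seq[:i])
max_len = max(len(s) for s in ngrams)
ngrams = pad_sequences(ngrams, maxlen=max_len, padding="pre")
X, y = ngrams[:, :-1], ngrams[:, -1]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),
    tf.keras.layers.LSTM(100),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=50, verbose=0)
```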

We use AI because we want to convey the story in a unique and creative way. It also shows how the women's stories align, and the result is an anonymous story that does not actually belong to anyone: a collective text that leaves room for personal interpretation. Furthermore, taking in that story requires less effort than reading a report or paper, so it will hopefully create greater awareness and reach a different audience.

The text generator ran inside the notebook we used, thanks to Jeremy Chow. Jupyter Notebook is a web tool that lets you create, execute, and share Python documents directly from your browser.

VR environment

VR conveys this story through an immersive experience, combining audio and visual components, and thus leaves a stronger impression. We used webVR because it is more user-friendly: viewers do not need VR glasses to explore the space, only a browser. The Louvre Online Tour, for example, lets users digitally stroll around the Louvre and explore its masterpieces in the same way. Of course, VR glasses enhance the experience, but they are not required.

To build this virtual reality space, we asked a graphic designer to create a video that matched our theme and aesthetic; the outcome is the video you see in the background. We also asked a soundscape artist to create a background sound that is soothing, reminiscent of technology, and distinctive to our setting and concept. The AI output was voiced over to increase immersion and engagement within the space. In addition to these features and the quotes we designed, we included navigation components to help viewers find their way around the space. The AI answer animation was also co-created with a designer.

We included quotes from the interviews for several reasons. The quotes made a big impression on us and on the people we showed the project to. We had also received feedback that the project lacked nuance, so we added the quotes to provide nuance and context. The quotes were chosen based on the AI-generated results, to establish a link between the AI output and the women's actual input.

VR enhances immersion: caught in another world, undistracted by the outside one, the user is fully involved and absorbed in a task, which can lead to awareness (Mütterlein, 2018). It also enables interaction: users can 'walk around' and engage with the story, and being in command of one's own actions results in a greater sense of immersion and flow (Mütterlein, 2018).

For this project, we used Pano2VR to create the webVR environment. We broke the story down into separate slides according to the questions we asked the AI; each slide contains quotes and the AI answer. Read more about the VR and explore the space.

We would like to thank everyone who helped us create the VR space:

Background music
Hyunji Jung

Background video
Jerry Estié

Voice-over
Leah

Font logo
Beatrice Caciotti

Font quotes
Nina Stössinger

Animation
Joao Loupatty

Help with translations
Baan Al-Othmani

Understanding VR principles
Ronald van Essen

References

A Tour of TensorFlow (Goldsborough, 2016)

The Three Pillars of Virtual Reality? Investigating the Roles of Immersion, Presence and Interactivity (Mütterlein, 2018)

Qualitative Interviews. In: The SAGE Handbook of Qualitative Data Collection (Roulston & Choi, 2018)

Photo: Fleur @yer_a_wizard