I will share some information about the audio aspects of Kanien’kéha XR spaces and experiences. Virtual reality production with Unreal Engine, Unity3D, Android Studio, and Xcode enables us to include spatial audio features. Spatial audio, also referred to as 3D audio, makes it possible to hear sounds coming from specific locations within extended reality spaces (3D, VR, AR, 360, 180, XR). In Kakwitene VR, for example, you hear Kanien’kéha emitting from each colorful flower and pollen sphere that you touch.
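The core idea behind spatial audio can be sketched in a few lines: a sound's gain falls off with distance from the listener, and its left/right balance follows the direction to the source. This is a minimal illustrative sketch only, not how Unreal Engine, Unity3D, or Resonance Audio actually implement spatialization (real engines add HRTFs, reverb, and occlusion); the function name and parameters are my own.

```python
import math

def spatialize(listener, source, ref_dist=1.0, max_dist=20.0):
    """Toy point-source spatialization: inverse-distance gain rolloff
    plus a constant-power stereo pan derived from the horizontal angle.
    Positions are (x, y, z) tuples; -z is assumed to be 'forward'."""
    dx = source[0] - listener[0]
    dy = source[1] - listener[1]
    dz = source[2] - listener[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    gain = ref_dist / max(dist, ref_dist)   # full volume inside ref_dist
    if dist > max_dist:
        gain = 0.0                          # silent beyond the cutoff
    azimuth = math.atan2(dx, -dz)           # 0 = straight ahead, + = right
    pan = max(-1.0, min(1.0, azimuth / (math.pi / 2)))
    theta = (pan + 1.0) * math.pi / 4       # constant-power pan law
    return gain * math.cos(theta), gain * math.sin(theta)
```

A source straight ahead yields equal left and right gains; one off to the right pushes the signal into the right channel; anything past `max_dist` goes silent, which is roughly what you hear walking among the flowers in Kakwitene VR.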
Sometimes fluent speakers do not want to be in front of cameras, and they do not want their voices recorded either. They prefer that the software engineers and tech team learn to speak Kanien’kéha with our own voices. If I am not familiar or comfortable with speaking a word or phrase myself, I ask the fluent speaker to help me: I record myself repeating exactly what the fluent speaker says, so that I can either use the recording of my own voice in a production or keep it as a reference for later full productions with edited audio, effects, and more as needed.
When I produce new Kanien’kéha audio files, I also have to keep each file size as small as possible and save it out to several different file formats so that it is accessible on mobile devices, desktops, TVs, text-to-speech solutions, speech engines, and more. This also requires each audio file to be hyper-focused on one word or phrase, with the metadata correctly labeled so it works and plays well across the wide variety of devices and end-user solutions.

The AI work that MoniGarr’s team is doing for Kanien’kéha revival and retention requires clean data: relevant and respectful audio, video, text, and animations that all work together and remain reusable in a huge range of end-user solutions and scenarios. These enable communications, entertainment, media, and every industry imaginable to support Kanien’kéhake (the people) who choose to communicate in their own families’ and communities’ Kanien’kéha dialects.

This workflow made it possible for MoniGarr to integrate Kanien’kéha-speaking bots, built on Microsoft text-to-speech engines, into remote-control animatronic vehicles on a public road in the early 2000s. That same Kanien’kéha knowledge base (with over 90,000 NLP patterns) is still usable and relevant today in 2022 for XR devices, mobile devices, wearables, and any communication device imaginable.
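The "one word or phrase per file, with labeled metadata" workflow above can be sketched with nothing but the Python standard library. This is a hypothetical illustration of the idea, not MoniGarr's actual pipeline: the function name, the sidecar-JSON approach, and the metadata fields are my own assumptions (real productions would record a voice, not the placeholder tone written here, and may embed tags directly in the audio container instead).

```python
import json
import math
import pathlib
import struct
import wave

def write_word_clip(word, out_dir, duration_s=0.5, rate=22050):
    """Write one small mono 16-bit WAV for a single word, plus a JSON
    metadata sidecar describing it. The audio body is a placeholder
    sine tone standing in for a real recording."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    wav_path = out / f"{word}.wav"
    with wave.open(str(wav_path), "wb") as w:
        w.setnchannels(1)    # mono keeps the file small
        w.setsampwidth(2)    # 16-bit PCM
        w.setframerate(rate) # modest rate is plenty for speech
        n = int(duration_s * rate)
        frames = b"".join(
            struct.pack("<h", int(8000 * math.sin(2 * math.pi * 440 * i / rate)))
            for i in range(n)
        )
        w.writeframes(frames)
    meta = {
        "word": word,            # exactly one word or phrase per file
        "language": "moh",       # ISO 639-3 code for Kanien'kéha (Mohawk)
        "dialect": "unspecified",# illustrative field: family/community dialect
        "speaker_consented": True,
        "formats": ["wav"],      # other formats would be exported alongside
    }
    sidecar = out / f"{word}.json"
    sidecar.write_text(json.dumps(meta, ensure_ascii=False, indent=2))
    return wav_path
```

Keeping the metadata in a predictable, machine-readable place is what lets the same clip be reused by a mobile app, a TTS pipeline, and an XR scene without re-editing.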
A few of the resources I use when producing Kanien’kéha (Mohawk Language) spoken words, music, and sound effects, which you might find useful for your own projects, include:
- CyberLink Studio
- Mobile devices with a microphone, to record audio while sitting alone in my car in a quiet place. Yes, a car with your mobile device is an excellent sound studio!
- Yeti Microphone on my old Mac with GarageBand.
- GarageBand on Mac and iPhone
- Android Studio
- Xcode
- Unreal Engine
- Unity3D
- SparkAR
- Lens Studio
- Effect House
- Insta360 Studio
- Various A.I. Generators (music, sound effects, music production / editing)
- Dolby.io
- Google VR Resonance Audio, Cardboard SDK
- Facebook 360
- YouTube VR
MoniGarr can be hired to provide Mohawk Language XR presentations, demos, and workshops, offering both high-level general information and detailed technical instruction based on your group’s own projects, goals, and interests.