The promise of spatial audio in gaming (and for all other audio markets) sounds compelling: a fully three-dimensional soundscape that envelops the listener, provides critical gameplay cues, and deepens immersion into virtual worlds. We hear terms like audio objects, binaural, HRTFs, ambisonics, and Atmos, and envision experiences where sound isn't just an afterthought, but a fundamental layer of reality [1, 2]. The benefits are undeniable: heightened immersion that makes virtual worlds tangible, pinpoint localization of sounds and players that offers a critical tactical edge, and an unparalleled sense of presence that should make you feel "there" [1, 3, 4].
But for many developers and, crucially, for players, these promises often collide with a frustrating reality. The ultimate goal is always to provide a fantastic player experience, yet for spatial audio, the burden of configuration has frequently and unfairly fallen on the player.
Would we expect players to install software or change OS display settings just to see the visuals correctly? Unlikely.
While developers can't control the specific type of headphones or even laptop speakers a player uses, much like they can't control the exact display or VR headset, they can and should control the audio signal and spatial rendering before it reaches those devices. This ensures that the vast majority of players, who use headphones or even standard laptop speakers, get a great binaural mix right out of the box (yes, binaural; no need to be afraid as long as you control it; everybody is on headphones; who uses a 7.1.4 setup at home?).
I believe that the user shouldn't need specialized knowledge of spatial audio formats, have to navigate out-of-game sound settings to activate it, or install extra software to make the game sound as intended. When the meticulously crafted spatial mix is instead subjected to the "spatial audio lottery": a gamble on the end-user's specific knowledge, their OS settings, the chosen (or default) system spatializer, and driver versions, then the user experience suffers.
It's in this gap that solutions like atmoky trueSpatial for Wwise enter the conversation, not just as another plugin, but as a potential paradigm shift, empowering developers to deliver consistency and reclaim control, ultimately benefiting the player with an effortless, high-quality auditory experience.
While experts might still appreciate options to tweak, the baseline user experience should be excellent for everyone.
The Alluring Promise vs. The Fragmented Reality
At its best, spatial audio leverages technologies like:
- Audio Objects: Discrete sounds with 3D positional metadata, allowing them to be placed and moved precisely within the game world [3, 5, 6, 7].
- Audio Beds: Traditional channel-based audio (e.g., 7.1.4) that is pre-produced and provides a foundational ambience or linear content [2, 8].
- Ambisonics: 3D sound field recordings or productions that are ideal for VR and more dynamic ambiences [2, 9, 10].
- Binaural audio based on Head-Related Transfer Functions (HRTFs): Crucial filters that process audio for headphones, creating a ‘natural’ 3D effect by mimicking how our head and ears shape sound [6, 11, 12].
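The binaural cues an HRTF encodes can be approximated with textbook formulas. Here is a minimal sketch using Woodworth's spherical-head model for the interaural time difference (ITD) and a crude sine-law estimate for the interaural level difference (ILD); the constants and the ILD law are deliberate simplifications for illustration, not anything atmoky or Wwise ships:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, average head radius used by the Woodworth model

def interaural_cues(azimuth_deg: float) -> tuple[float, float]:
    """Rough interaural time difference (ITD, seconds) and broadband
    level difference (ILD, dB) for a far-field source at a given
    azimuth (0 deg = front, 90 deg = right)."""
    theta = math.radians(azimuth_deg)
    # Woodworth ITD: r/c * (theta + sin(theta)), valid for |theta| <= 90 deg
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (theta + math.sin(theta))
    # Crude ILD estimate: head shadowing grows toward the side
    ild = 6.0 * math.sin(theta)
    return itd, ild

# A source at 90 degrees arrives roughly 0.66 ms earlier at the near ear
itd, ild = interaural_cues(90.0)
```

Real HRTF rendering convolves each ear's signal with measured, frequency-dependent filters; the point here is only that the cues being reproduced are physically grounded and sub-millisecond in scale, which is why sloppy processing chains can destroy them.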
When these elements work in harmony, the player experience is transformed. And, yes, you can definitely hear the difference between a stereo and a fully 3D-binaural interaction.
Quick comparison of binaural to stereo for a source in the horizontal plane. In the vertical plane the differences will be even more severe.
However, the path to consistent spatial audio is difficult when relying on system-level spatial audio processing. Here is why.
The first thing to keep in mind is that different build platforms (PC, Xbox, PlayStation, etc.) and even different OS versions can have different spatial audio APIs and rendering behaviors. This platform inconsistency means developers must ensure that the mix translates accurately across every platform or, alternatively, sound designers need to create a separate mix for each one [2, 13, 14].
Then there is the risk of reprocessing or "double processing". If a platform has its own 'virtual spatial audio' engine and applies its system-level spatializer (as on iOS, or when using Windows Sonic or Dolby Atmos for Headphones) on top of the game's already-spatialized mix, the result is often degraded audio quality, phasing, comb filtering, and an unnatural soundstage [4, 15].
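The comb-filtering risk is easy to demonstrate: summing a signal with a slightly delayed copy of itself (a stand-in for a mix being spatialized twice through paths with different latencies) carves periodic notches into the spectrum. A small numpy sketch, with the 1 ms delay chosen purely for illustration:

```python
import numpy as np

fs = 48_000                      # sample rate in Hz
delay = 48                       # 1 ms offset between the two render paths

# An impulse summed with a delayed copy of itself: what can happen when a
# game's binaural mix is spatialized again by a system-level renderer.
doubled = np.zeros(4096)
doubled[0] = 1.0
doubled[delay] += 1.0            # the "second" (reprocessed) path

spectrum = np.abs(np.fft.rfft(doubled))
freqs = np.fft.rfftfreq(len(doubled), d=1 / fs)

# Comb filter: deep notches at odd multiples of fs / (2 * delay) = 500 Hz
notch = spectrum[np.argmin(np.abs(freqs - 500.0))]
peak = spectrum[0]               # constructive gain at DC
```

With a 1 ms offset the first null lands at 500 Hz, squarely in the range where localization cues live, and the pattern repeats every 1 kHz across the whole spectrum.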
And then there are many factors controllable only by the users themselves. Let’s focus on the Fragility of User Configuration:
- Hardware Variance & the Dominance of Headphones: The audio output devices used by players range from basic stereo headphones and earbuds to high-fidelity gaming headsets, desktop stereo speakers, soundbars, and really few multichannel home theater systems [16, 17]. Industry observations and market trends consistently indicate that stereo headphones are the predominant audio interface for a significant majority of gamers, particularly in PC and console gaming [4, 18]. This preference is driven by factors such as immersive potential, cost-effectiveness, private listening, and the convenience of integrated microphone solutions in gaming headsets. While dedicated loudspeaker setups, including surround sound systems, offer a different kind of immersive experience, their adoption is typically limited to a smaller segment of the player base due to cost, space, and setup complexity [2]. Even users with capable loudspeaker systems may opt for headphones for competitive play or late-night sessions. The vast differences in audio devices, especially headphones, mean a mix tailored for a high-end system can sound flawed on consumer hardware. System spatializers further complicate this. Therefore, delivering a consistent, high-quality binaural output from the game is crucial. This ensures a great initial experience for most players without needing technical knowledge or adjustments. While advanced users might want customization, the default audio should be excellent for all, including those using basic headphones or laptop speakers.
- Hidden and Confusing OS Audio Settings: Enabling and configuring spatial audio technologies like Windows Sonic, Dolby Atmos for Headphones, or DTS Headphone:X on platforms such as Windows or Xbox requires users to navigate through sound settings menus [19, 20]. Other operating systems don't support this at all, and many players may be unaware these settings exist, unsure how to configure them correctly for their specific hardware, or uncertain which of the available options is preferable or even compatible. Some solutions, like Dolby Atmos for Headphones, even require an additional purchase and app installation, which is a bit absurd.
- Inconsistent Adoption/Usage: There is a notable lack of clear, recent, and granular statistical data detailing precisely how many gamers actively use and, more importantly, correctly configure OS-level spatial audio features. While market reports indicate strong sales of gaming headsets [18, 21], this does not translate directly into adoption rates for specific software-based spatial audio solutions on the host system. Broad surveys like the Steam Hardware Survey provide general PC specification trends but typically do not delve into the specifics of audio output configurations or spatial audio software usage [22, 23]. Developers simply cannot assume that players will have spatial audio enabled or optimally configured on their systems [17]. This uncertainty makes it exceedingly risky to design a game's core audio experience to be dependent on such user-managed variables.
My verdict is that even with a powerful middleware like Wwise, which provides the tools for authoring object-based audio and routing to various endpoints [5, 8, 24], the final output can still be subject to the limitations and behaviors of the OS-level spatializer. The number of System Audio Objects available differs per platform and is finite; if the limit is reached, sounds can be folded down, losing their discrete spatial identity [8, 13]. Settings like "Allow System Audio Objects" in Wwise Authoring also highlight the complexities of managing these resources during development [25].
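What "folding down" means in practice can be sketched in a few lines. This is purely illustrative logic, not Wwise's or any platform's actual fold-down algorithm; `submit_objects` and the flat front-left panning are invented for the example:

```python
def submit_objects(objects, system_object_limit, bed_channels=12):
    """Illustrative fold-down: the first N objects keep discrete
    positional metadata; the rest are mixed into a channel bed and
    lose their individual spatial identity."""
    discrete = objects[:system_object_limit]
    folded = objects[system_object_limit:]
    bed = [0.0] * bed_channels
    for obj in folded:
        # A real renderer would pan by position; here everything
        # folded simply piles into the front-left channel.
        bed[0] += obj["gain"]
    return discrete, bed

objs = [{"id": i, "gain": 1.0} for i in range(20)]
discrete, bed = submit_objects(objs, system_object_limit=16)
# 16 objects stay discrete; the remaining 4 vanish into the bed
```

The gameplay consequence is the important part: once an object is folded, it is no longer head-tracked or individually spatialized, and the developer has no say in which sounds get demoted.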
Signal flow for rendering spatial audio in the game vs. sending audio objects to third-party or OS-level spatializers.
However, there seems to be a growing consensus: to achieve true artistic intent and a reliable, high-quality spatial audio experience, developers and sound designers need to move towards in-engine authoring, control, and rendering. This "it just works" philosophy is paramount and is achievable with in-game rendering.
Enter atmoky trueSpatial for Wwise: A Bid for Auditory Consistency
This is where atmoky trueSpatial for Wwise positions itself as a complete solution [25]. It is not just a set of effects; it's a comprehensive spatial audio rendering engine designed to operate within Wwise (integrations for FMOD and native Unity and Unreal Engine are also available), giving developers and sound designers end-to-end control over the spatialization process, particularly for headphone users.
The core of atmoky trueSpatial is to provide consistent, predictable, and high-fidelity spatial audio rendering, irrespective of the end-user's platform, device, or system settings. We aim to take the "lottery" out of spatial audio by ensuring the game ships with its own advanced spatializer, delivering that crucial out-of-the-box quality [26, 28]. This means players can seamlessly switch between headphones and loudspeaker setups, and the game's audio, processed by trueSpatial, adapts to provide an optimized and consistent spatial experience for that new device without requiring manual changes in additional software from the player.
How atmoky trueSpatial Aims to Deliver on the Promise
atmoky trueSpatial integrates into Wwise as a suite of plugins, most notably featuring the ObjectRenderer and Renderer [28]. Here’s a breakdown of its key technological pillars and why they matter in addressing the challenges outlined:
Perceptually-Optimized, In-Wwise Rendering:
- What it is: trueSpatial uses atmoky’s proprietary and patented 3D audio processing. This isn't just about mathematical panning; it's built on a "perceptually-optimized model" designed to simulate human hearing with high accuracy. Crucially, this rendering happens inside Wwise, meaning inside the game, before the audio hits the operating system's final output stages.
- Why it matters: This directly tackles platform inconsistency and the risk of reprocessing artifacts. Rendering the final binaural (or, of course, multichannel) output within Wwise with the trueSpatial engine ensures that the developer's intended spatial mix is what gets delivered. It effectively bypasses the need for, and the potential interference from, varying OS-level spatializers.
A/B comparison of various spatializers. Object is a static white noise moving on a counter-clockwise half circle.
The "Externalisation Booster" (for binaural only):
- What it is: A common hurdle for headphone spatial audio is the "in-head localization" phenomenon. trueSpatial includes an adjustable "Externalisation Booster" specifically designed to counteract this, creating a more natural perception of sounds originating from external points in space [26, 29]. For a 2-channel output, you can choose between stereo and binaural rendering. In the case of binaural output, we offer a dedicated externalizer unit; we encourage its use, especially if no additional spatial reverb is employed.
- Why it matters: Achieving good externalization is key to believable immersion and reducing listening fatigue.
Near-Field & Parallax Effect Simulation:
- What it is: In practice, sounds very close to the listener behave differently than distant ones. trueSpatial incorporates near-field rendering, including parallax effects, to simulate the acoustic nuances of proximate sources [26, 28, 29].
- Why it matters: This adds a significant layer of realism and intimacy, especially for interactive elements or character interactions close to the player. It’s a detail often overlooked by simpler spatializers.
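The parallax component of near-field rendering follows from simple geometry: each ear sees a close source at a noticeably different angle, while for a distant source both angles converge. A minimal sketch with simplified head dimensions (real near-field rendering also models frequency-dependent level changes, which this ignores):

```python
import math

EAR_OFFSET = 0.0875  # m, ear distance from head center (simplified)

def per_ear_azimuths(src_x, src_y):
    """Angle from each ear to a source at (x, y) in head-centered
    coordinates (x = right, y = front), in degrees."""
    left = math.degrees(math.atan2(src_x + EAR_OFFSET, src_y))
    right = math.degrees(math.atan2(src_x - EAR_OFFSET, src_y))
    return left, right

near = per_ear_azimuths(0.0, 0.25)   # source 25 cm in front of the nose
far = per_ear_azimuths(0.0, 10.0)    # source 10 m straight ahead
parallax_near = near[0] - near[1]    # roughly 39 degrees of disagreement
parallax_far = far[0] - far[1]       # only about 1 degree
```

A spatializer that treats every source as far-field renders both cases identically, which is exactly why whispering characters or handheld props can sound subtly wrong without near-field handling.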
Advanced Acoustic Realism: Occlusion and Directivity:
- Sound Object Occlusion: trueSpatial allows for modeling how in-game objects block and filter sound (in Wwise this is handled out of the box and not by trueSpatial) [26, 29].
- Sound Source Directivity: Developers can shape how sound radiates from a source (e.g., a character speaking forward vs. an omnidirectional ambient sound) [26, 29].
- Why it matters: These features move beyond simple 3D positioning to create a more dynamic and believable acoustic environment. Occlusion and directivity are vital for conveying environmental context and improving the clarity and realism of specific sound sources, directly impacting gameplay and immersion.
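Source directivity is commonly modeled with a first-order pattern. The sketch below is the textbook cardioid family, shown only to make the concept concrete; it is not necessarily trueSpatial's internal model:

```python
import math

def directivity_gain(alpha: float, angle_deg: float) -> float:
    """First-order directivity pattern: alpha = 1 is omnidirectional,
    alpha = 0.5 is a cardioid, alpha = 0 is a figure-of-eight.
    angle_deg is the angle between the source's facing direction
    and the direction toward the listener."""
    theta = math.radians(angle_deg)
    return abs(alpha + (1.0 - alpha) * math.cos(theta))

# A cardioid "voice": full level in front, silent directly behind
front = directivity_gain(0.5, 0.0)     # 1.0
behind = directivity_gain(0.5, 180.0)  # 0.0
```

Walking behind a speaking character and hearing their voice duck naturally is the audible payoff of even this simple model.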
Comprehensive Soundscape Support (Objects, Ambisonics, Beds):
- What it is: While excelling at object rendering, trueSpatial is also designed to handle Ambisonic soundfields and traditional channel-based beds, rendering them appropriately within its spatial engine [26].
- Why it matters: This allows developers to use the best audio format for each element of their sound design within Wwise, knowing that trueSpatial can cohesively integrate them into the final spatialized output.
Cross-Platform Consistency as a Cornerstone:
- What it is: A major claim by atmoky is that trueSpatial delivers the same rendering result and audio quality regardless of the platform, device, or user's endpoint settings [26, 28]. trueSpatial supports Windows, macOS, Linux, iOS, Android, VisionOS, PS, Xbox, and Switch.
- Why it matters: This is the holy grail for many developers. It means the spatial audio mix and blend created in the studio is precisely what the player hears, whether they're on a high-end PC, a console, or a mobile device. It directly solves the "fragmentation problem" and the unreliability of user configurations.
High Performance and Efficiency:
- What it is: We at atmoky emphasize a small memory footprint and high processing performance for our plugins [26].
- Why it matters: Sophisticated spatial audio processing can be CPU-intensive. Efficiency is crucial to ensure that implementing high-quality spatial audio doesn't unduly impact overall game performance, especially when dealing with many dynamic sound objects. You can find more on performance considerations here: https://atmoky.com/blog/on-the-performance-of-processing-in-game-audio
Dissecting trueSpatial for Wwise: Our Plugins & Your Workflow
At atmoky, we've designed trueSpatial for Wwise with the philosophy that spatial audio should be intuitive to implement and utterly reliable in its output, no matter the platform or device. Here you can learn more about the core components and the workflow for using them in Wwise.
atmoky ObjectRenderer (with integrated Metadata Control): This is your primary engine for spatializing individual Wwise Audio Objects.
- Purpose and Wwise Workflow: You'll typically insert our atmoky ObjectRenderer as an effect plugin on an Audio Bus in Wwise's Master-Mixer Hierarchy. This bus should be configured to handle Audio Objects (set its bus configuration to "Audio Objects"). Any sounds in your game that need precise 3D positioning – think character footsteps, specific sound effects, enemy calls – are routed to this bus. Our Object Renderer then takes these objects, along with their positional metadata plus metadata on source properties (see MetaData plugins), and applies our proprietary spatialization algorithms. It's built to handle a substantial number of audio objects (no inherent limit) efficiently within the software, so your players won't need specific audio endpoints pre-installed or any special hardware. Apart from ‘classical‘ Audio Objects, the Object Renderer also handles beds without positional updates.

UI of the atmoky ObjectRenderer in the Wwise workflow. The ObjectRenderer allows seamless output switching between binaural and loudspeaker setups, and there is no fixed limit on the number of objects.
- Integrated Metadata Control: The atmoky ObjectRenderer works hand-in-hand with what can be conceptualised as atmoky Metadata features or plugins. While some parameters are inherent to the renderer's default behaviour, you gain direct and granular control over individual audio object parameters like Source Directivity, Near-Field Effects, Stereo Width, and LFE Gain [28]. These metadata controls allow you to fine-tune how each object is rendered, and the Object Renderer applies these specifics accordingly.

UI for setting the directivity of a source. This data is adjustable for each object separately and fed to the ObjectRenderer for processing.
- Output Flexibility: Our Object Renderer supports all state-of-the-art output formats. You decide whether the final output should be binaural for headphones (our specialty for player experience!), standard stereo, multi-channel for loudspeaker setups (e.g., 7.1.4), or even up to 5th order Ambisonics for scene-based audio.
- Bypassing System Interference: A key design principle is ensuring that audio processed by our Object Renderer isn't then mangled by system-level spatializers. Even if Wwise's "Allow System Audio Objects" is enabled in Authoring Preferences for other purposes, sounds routed to the atmoky ObjectRenderer bus are handled by our engine. We generally recommend disabling "Allow System Audio Objects" when primarily using trueSpatial to simplify the signal path, but our plugins can coexist if you have specific needs for routing some objects to system endpoints and others through trueSpatial.
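For context on the "up to 5th order Ambisonics" output option mentioned above: the channel count of a full-sphere Ambisonics stream grows quadratically with order, which is why higher orders are both spatially sharper and heavier to carry through a bus:

```python
def ambisonic_channels(order: int) -> int:
    """Channel count of a full-sphere Ambisonics stream of a given
    order: (order + 1) squared."""
    return (order + 1) ** 2

# 1st order = 4 channels (W, X, Y, Z) ... 5th order = 36 channels
counts = {n: ambisonic_channels(n) for n in range(1, 6)}
```

So a 5th-order scene bus carries 36 channels where a 7.1.4 bed carries 12; the trade-off is resolution versus bandwidth and CPU.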
atmoky Renderer: Think of this plugin as our versatile workhorse for everything that isn't an individual dynamic audio object, or if you prefer a bus-based mixing workflow (e.g. using 7.1.4 as an intermediate format). It can also function as a powerful master output renderer, akin to our "atmoky Ears" plugin, ensuring the entire mix is perfectly tailored for the chosen output.
- Wwise Workflow: You can use our atmoky Renderer in several ways: as the master output renderer (place it on your Main Mix or Master Audio Bus), or to render beds and sub-mixes. It takes the complete game mix (or specific submixes) and renders it to the desired final output format (binaural, various loudspeaker configurations). This is absolutely key for enabling seamless output-format switching for the player.
- Capabilities: Its ability to render "any input format to any output" [28] offers incredible flexibility. When used on the master output, it guarantees that whether a player is using headphones or switches to a multi-channel speaker system, our trueSpatial engine intelligently adapts the rendering. This provides an optimized and consistent spatial experience for that specific endpoint, crucially without the player needing to touch any OS or third-party settings.
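At its simplest, "any input format to any output" can be pictured as a channel-mapping matrix applied per audio frame. The sketch below uses a static ITU-style 5.1-to-stereo downmix (LFE omitted); the actual trueSpatial rendering is far more sophisticated than a fixed matrix, so treat this only as showing the shape of the problem:

```python
import numpy as np

# Input channel order: L, R, C, LFE, Ls, Rs
DOWNMIX_5_1_TO_STEREO = np.array([
    #  L    R    C      LFE  Ls     Rs
    [1.0, 0.0, 0.707, 0.0, 0.707, 0.0],    # -> output left
    [0.0, 1.0, 0.707, 0.0, 0.0,   0.707],  # -> output right
])

frame_5_1 = np.array([0.5, 0.5, 1.0, 0.0, 0.2, 0.2])  # one audio frame
stereo = DOWNMIX_5_1_TO_STEREO @ frame_5_1             # one matrix multiply
```

A format-agnostic renderer generalizes this idea: swap the matrix (or a full binaural filter bank) when the player switches endpoints, and the mix upstream never has to change.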
Getting Started with atmoky trueSpatial in Wwise
We've made getting started with trueSpatial in your Wwise projects as straightforward as possible:
- Visit the atmoky Developer Hub: Your first stop is our official developer documentation website: https://developer.atmoky.com/true-spatial-wwise/
- Download & Evaluate: From the hub, you can download the trueSpatial plugin suite. Installation is a standard Wwise plugin process, integrating trueSpatial into your authoring tool and game's sound engine, as detailed in our documentation. We offer trial licenses so you can fully evaluate the technology and explore its capabilities right within Wwise [26, 28].
- Licensing: After evaluation, commercial licenses are available, typically structured based on your project's scope [26]. At the time the title is released, a commercial license has to be in place. We believe this direct approach allows your team to quickly get hands-on and see how trueSpatial can elevate your game's audio.

Getting started with trueSpatial for Wwise in under 10 minutes.
Why Choose atmoky trueSpatial?
While Wwise offers robust spatialization features, it lacks built-in binaural output and an audio object renderer. Other third-party spatializers exist, but we designed atmoky trueSpatial for Wwise to differentiate itself through:
- Deterministic, In-Wwise Rendering for Unwavering Consistency: This is central to our design. By taking control of the final spatialization before the OS can interfere, we offer a level of predictability that system-dependent solutions struggle to match. It’s about empowering you to ship your definitive renderer with the game, ensuring a quality baseline for all players.
- Holistic, Focused on Perceptual Realism: trueSpatial is more than just a panner; it’s our collection of tools (externalization, near-field, occlusion, directivity) engineered to address specific perceptual challenges in creating believable 3D sound. Our "perceptually-optimized" approach is built on a strong foundation in psychoacoustic research, drawing from academic work in areas like advanced HRTF modeling, Ambisonic rendering, and optimizing binaural output based on how humans actually perceive sound localization cues [39, 40].
- Empowering Developers, Simplifying for Players: By embedding our advanced rendering within Wwise, you, the developer, gain deep control. Crucially, this translates to a superior spatial audio experience for players by default, without them needing to become audio technicians or dive into system settings. Our system is designed so players can seamlessly switch between headphones and loudspeaker setups, with trueSpatial adapting the rendering to provide an optimal experience for the chosen output. This perfectly aligns with the "it just works" philosophy.
The Impact: Elevating the Auditory Dimension
For game audio professionals using Wwise, our trueSpatial solution offers the potential to:
- Realize Artistic Vision More Faithfully: Deliver complex, nuanced spatial mixes with greater confidence that they will be heard as intended by the broadest player base.
- Reduce Cross-Platform Headaches: Spend less time troubleshooting platform-specific spatial audio quirks and more time on creative sound design.
- Push Creative Boundaries: Utilize advanced spatial features like detailed occlusion and directivity to build more immersive and interactive sound worlds.
For players, the benefits are even more direct and immediate:
- Deeper, More Believable Immersion Instantly: Sound that genuinely feels like it’s part of the game world from the moment they launch the game, enhancing presence and engagement without any setup fuss.
- Clearer, More Actionable Spatial Cues Out-of-the-Box: Improved ability to locate sounds, which can be critical for gameplay, especially in competitive titles, accessible to everyone.
- A Consistent High-Quality Experience for All: Reliable spatial audio that works as intended, especially for the vast majority using headphones or laptop speakers, without requiring technical expertise or additional software. Players can switch between headphones and speakers, and the spatial audio experience remains optimized and consistent, thanks to the in-game rendering handling these transitions.
Conclusion: Taking Deliberate Control of the Spatial Soundscape for the Player
The journey of game audio from simple stereo to interactive 3D soundscapes has been remarkable. However, the final hurdle has often been the "last mile": ensuring that the carefully crafted audio experience is delivered intact and effortlessly to the player. Relying on a patchwork of system-level solutions has proven to be a fragile and often frustrating model, placing an unnecessary burden on the end-user.
Technologies like atmoky’s trueSpatial for Wwise represent a significant stride towards resolving these challenges. By betting on in-engine (or, in this case, in-middleware-controlled) rendering, prioritizing perceptual realism, and delivering a consistent cross-platform experience, we offer developers the tools to reclaim full artistic and technical control over the spatial soundscape.
"This isn't just about better audio for developers; it's about delivering a hassle-free and deeply immersive audio experience for every player, right from the first play."
The future of game audio is not just spatial; it's spatially consistent, developer-defined, and player-centric.
References
- Pulkki, V., & Karjalainen, M. (2015). Communication acoustics: An introduction to speech, audio and psychoacoustics. John Wiley & Sons. https://www.wiley.com/en-nl/Communication+Acoustics%3A+An+Introduction+to+Speech%2C+Audio+and+Psychoacoustics-p-9781118866542
- Rumsey, F. (2011). Spatial audio. Focal Press. https://www.routledge.com/Spatial-Audio/Rumsey/p/book/9780240516233?srsltid=AfmBOoql0er0LCSpnlCEEfwerj1IzNxqMGROjAOGFV3xf_irmDg34K6i
- Microsoft. (n.d.-a). Spatial sound. Microsoft Learn. Retrieved June 4, 2025, from https://learn.microsoft.com/en-us/windows/win32/coreaudio/spatial-sound
- GDC. (various years). GDC Vault. Retrieved June 4, 2025, from https://www.gdcvault.com
- Audiokinetic Inc. (n.d.-a). 3D Audio. Wwise Documentation. Retrieved June 4, 2025, from https://www.audiokinetic.com/en/public-library/2024.1.5_8803/?source=Help&id=working_with_3d_objects
- Geier, M., Ahrens, J., & Spors, S. (2010). Object-based audio reproduction and the audio scene description format. Organised Sound, 15(3), 219-227. https://api-depositonce.tu-berlin.de/server/api/core/bitstreams/9876da4a-21a5-456a-965e-cc2b2240a917/content
- Ahrens, J. (2012). Analytic methods of sound field synthesis. Springer. https://link.springer.com/book/10.1007/978-3-642-25743-8
- Dolby Laboratories. (n.d.). Dolby Atmos for Game Developers. Dolby Developer. Retrieved June 4, 2025, from https://games.dolby.com/atmos/
- Noisternig, M., Sontacchi, A., Musil, T., & Holdrich, R. (2003, June). A 3D ambisonic based binaural sound reproduction system. In Audio Engineering Society Conference: 24th International Conference: Multichannel Audio, The New Reality. Audio Engineering Society. https://www.researchgate.net/profile/Alois-Sontacchi/publication/228888484_A_3D_ambisonic_based_binaural_sound_reproduction_system/links/0046351b9d5995164a000000/A-3D-ambisonic-based-binaural-sound-reproduction-system.pdf
- Zotter, F., & Frank, M. (2019). Ambisonics: A practical 3D audio theory for recording, studio production, sound reinforcement, and virtual reality. Springer. https://link.springer.com/book/10.1007/978-3-030-17207-7
- Blauert, J. (1997). Spatial hearing: The psychophysics of human sound localization. MIT Press. https://mitpress.mit.edu/9780262024136/spatial-hearing/
- Møller, H., Sørensen, M. F., Hammershøi, D., & Jensen, C. B. (1995). Head-related transfer functions of human subjects. Journal of the Audio Engineering Society, 43(5), 300-321. https://vbn.aau.dk/files/227875164/1995_M_ller_et_al_AES_Journal_c.pdf
- Audiokinetic Inc. (n.d.-c). Using the System Audio Device. Wwise Documentation. Retrieved June 4, 2025, from https://www.audiokinetic.com/en/public-library/2024.1.5_8803/?source=Help&id=system_audio_device
- Amatriain, X., Arumi, P., & Garcia, D. (2008). A framework for efficient and rapid development of cross-platform audio applications. Multimedia Systems, 14, 15-32. https://amatria.in/pubs/clam-mmSystems.pdf
- Microsoft. (n.d.-c). AudioObjectType enumeration (spatialaudioclient.h). Microsoft Learn. Retrieved June 4, 2025, from https://learn.microsoft.com/en-us/windows/win32/api/spatialaudioclient/ne-spatialaudioclient-audioobjecttype
- Ford, H. (2024, February 26). The best gaming headsets in 2024. PC Gamer. Retrieved June 4, 2025, from https://www.pcgamer.com/best-gaming-headset/
- Marks, A. (2009). The complete guide to game audio: For composers, musicians, sound designers, and game developers (2nd ed.). Focal Press. https://www.sciencedirect.com/book/9780240810744/the-complete-guide-to-game-audio
- Grand View Research. (2023). Gaming headset market size, share & trends analysis report. Retrieved June 4, 2025, from https://www.grandviewresearch.com/industry-analysis/headset-market#:~:text=The%20global%20headset%20market%20size%20was%20estimated%20at%20USD%2061.08,USD%20558.89%20billion%20by%202030.
- Microsoft. (n.d.-b). Implementing spatial sound in games. Microsoft Game Dev. Retrieved June 4, 2025, from https://learn.microsoft.com/en-us/windows/win32/coreaudio/spatial-sound
- Microsoft Support. (n.d.). How to turn on spatial sound in Windows. Microsoft Support. Retrieved June 4, 2025, from https://support.microsoft.com/en-us/windows/how-to-turn-on-spatial-sound-in-windows-ca2700a0-6519-448d-5434-56f499d59c96
- Statista. (2023). Augmented reality (AR) and virtual reality (VR) headset shipments worldwide from 2019 to 2027. Retrieved June 4, 2025, from https://www.statista.com/statistics/653390/worldwide-virtual-and-augmented-reality-headset-shipments/#:~:text=In%202022%2C%20the%20number%20of,7.45%20million%20units%20in%202023.
- Valve Corporation. (n.d.-a). Steam hardware & software survey. Retrieved June 4, 2025, from https://store.steampowered.com/hwsurvey/
- Valve Corporation. (n.d.-b). Steam hardware & software survey: VR headsets. Retrieved June 4, 2025, from https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam
- Audiokinetic Inc. (n.d.-d). Wwise Release Notes. Wwise Documentation. Retrieved June 4, 2025, from https://www.audiokinetic.com/en/public-library/2024.1.5_8803/?source=SDK&id=releasenotes.html
- Audiokinetic Inc. (n.d.-b). Authoring Audio Preferences. Wwise Documentation. Retrieved June 4, 2025, from https://www.audiokinetic.com/en/public-library/2024.1.5_8803/?source=Help&id=audio_preferences
- atmoky. (n.d.-a). Spatial Audio Plugins for Wwise, FMOD and Unity. Retrieved June 4, 2025, from https://atmoky.com/products/true-spatial/
- atmoky. (n.d.-b). atmoky - Company. Retrieved June 4, 2025, from https://atmoky.com/company/
- atmoky. (n.d.-c). Overview | atmoky trueSpatial Wwise. atmoky Developer. Retrieved June 4, 2025, from https://developer.atmoky.com/true-spatial-wwise/docs
- Spæs Lab. (n.d.). Atmoky trueSpatial. spæs — lab for spatial aesthetics in sound Berlin. Retrieved June 4, 2025, from https://spaes.org/Atmoky-trueSpatial
- Wenzel, E. M., Arruda, M., Kistler, D. J., & Wightman, F. L. (1993). Localization using nonindividualized head-related transfer functions. The Journal of the Acoustical Society of America, 94(1), 111–123. https://doi.org/10.1121/1.407089
- Microsoft. (n.d.-d). Exclusive Mode Streams. Microsoft Learn. Retrieved June 4, 2025, from https://learn.microsoft.com/en-us/windows/win32/coreaudio/exclusive-mode-streams
- FMOD. (n.d.). Spatial audio. FMOD Documentation. Retrieved June 4, 2025, from https://www.fmod.com/docs/2.02/studio/welcome-to-fmod-studio-new-in-110.html#spatial-audio
- Savioja, L., & Svensson, U. P. (2015). Overview of geometrical room acoustic modeling techniques. The Journal of the Acoustical Society of America, 138(2), 708–730. https://doi.org/10.1121/1.4926438
- Serafin, S., Geronazzo, M., Erkut, C., Nilsson, N. C., & Nordahl, R. (2018). Sonic interactions in virtual reality: State of the art, current challenges, and future directions. IEEE computer graphics and applications, 38(2), 31. https://pubmed.ncbi.nlm.nih.gov/29672254/
- Sweet, M. (2014). Writing interactive music for video games: A composer's guide. Addison-Wesley Professional. https://www.oreilly.com/library/view/writing-interactive-music/9780133563528/
- Broderick, J., Duggan, J., & Redfern, S. (2018, August). The importance of spatial audio in modern games and virtual environments. In 2018 IEEE games, entertainment, media conference (GEM) (pp. 1-9). IEEE. https://ieeexplore.ieee.org/abstract/document/8516445
- Institute of Electronic Music and Acoustics (IEM) Graz. (n.d.). Spatial Audio. Retrieved June 4, 2025, from https://iem.kug.ac.at/en/research/fields-of-research/signal-processing-and-acoustics/spatial-audio
- Schörkhuber, C., Zaunschirm, M., & Höldrich, R. (2018, March). Binaural rendering of ambisonic signals via magnitude least squares. In Proceedings of the DAGA (Vol. 44, pp. 339-342). https://pub.dega-akustik.de/DAGA_2018/data/articles/000301.pdf
- Schörkhuber, C., Zaunschirm, M., & Höldrich, R. (2018). Binaural rendering of Ambisonic signals by head-related impulse response time alignment and a diffuseness constraint. The Journal of the Acoustical Society of America, 143(6), 3616-3627. https://doi.org/10.1121/1.5040489
- Zaunschirm, M., Frank, M., & Zotter, F. (2016). An Interactive Virtual Icosahedral Loudspeaker Array. Presented at DAGA 2016, Aachen. Retrieved from https://ambisonics.iem.at/Members/zotter/2016_zaunschirm_virtualico_daga.pdf
