Sifted | Article by Maija Palmer 16 Feb 2022
The metaverse still has a fair way to go before it looks or feels anything like the “real” world. Avatars still feel cartoonish. Backdrops often look a bit blocky and fantastical. And — crucially — the metaverse doesn’t always sound right.
Spatial sound — hearing noises grow and fade as you “move” around a virtual space — is one of the elements that makes a virtual world feel real. And it’s finally getting some attention.
The metaverse is alive with the sound of music
Music has become a big focus for metaverse platforms. In January, Warner Music Group announced a deal with cryptocurrency-based metaverse platform The Sandbox to create a musical theme park and concert venue on the platform.
WMG also invested in virtual concert platform Wave last year, and virtual concerts have taken over Fortnite, Minecraft and Roblox. Roblox is signing its own music industry deals and hosts listening parties where artists can debut new music.
“Sound is a key element of the metaverse to get right. It’s on the radar of all the major tech providers and we’ve also seen plenty of startups solving different problems here,” says Dave Haynes, one of the early team at Soundcloud, and now a metaverse investor at FOV Ventures.
But making these experiences really immersive is more complicated than just plugging in a pair of headphones and getting Snoop Dogg to host a session in his virtual mansion.
“If you walk into a room, just from looking around, you will have a pretty good idea of what the echo should sound like,” says Jon Olive, CTO and cofounder at MagicBeans, a spatial audio company that is also working with Warner Music, among other clients. “If I’m simulating that room I need to represent the sound in a way that meets your expectation of what’s going to happen.”
Spatial audio is also one of the essential features of virtual reality office meetings.
“One of the magical things about being at an event or meetup in VR (vs Zoom or similar) is being able to break out into smaller discussion groups, or overhear something in another corner of the room and be able to wander over and drop into the conversation. Just like you’d do in real life,” says Haynes.
A virtual concert, where the music sounds different depending on how close to a particular section you are standing, is a completely different experience from the “sound bubble” of perfectly balanced instrumentation when you listen to a recording on your stereo.
“An accurate and robust understanding of the diverse sounds around us will be central to our ability to create human-centred experiences in the metaverse,” says Andrew Williamson, managing partner at Cambridge Innovation Capital and one of the investors looking closely at metaverse sound startups.
Making the metaverse feel “real”
Sound has the ability to make the metaverse feel “real” sooner than the visuals. The visual side of the metaverse is still embryonic — and while there are more than 16m VR headsets now in circulation, according to Statista, that’s nothing compared to the number of headphones and mobile phones that are kicking around.
Mobile phones have motion sensors and webcams can sense where your head is. MagicBeans is exploring ways to use these to give people a sense of immersive sound — for example, being able to walk around a virtual orchestra and get a different sound experience depending on where you are standing.
But doing spatial audio is complicated. To accurately represent all the ways sound bounces off the surfaces in a particular room — changing subtly as you move around — requires intense computing power. MagicBeans’ secret sauce has been to strip back to just the essentials so that the calculations can be done on even a mobile phone.
“What you want to portray is something that your brain is willing to treat as real,” says Olive. “It turns out that it’s not a horrendous amount of maths. It’s within the capabilities of modern mobile devices to do what you need to do.”
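The stripped-back approach Olive describes can be illustrated with a first-order model. The sketch below is not MagicBeans' actual algorithm — just the textbook starting point, with a hypothetical `spatial_gains` function — computing a per-ear gain and a propagation delay for a listener and a sound source, using inverse-distance attenuation and a constant-power stereo pan:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, roughly, at room temperature

def spatial_gains(listener, source, ref_distance=1.0):
    """First-order spatialisation: distance attenuation plus a simple
    stereo pan derived from the source's bearing.

    listener, source: (x, y) positions in metres; the listener faces +y.
    Returns (left_gain, right_gain, delay_seconds).
    """
    dx = source[0] - listener[0]
    dy = source[1] - listener[1]
    distance = max(math.hypot(dx, dy), ref_distance)

    # Inverse-distance attenuation: 6 dB quieter per doubling of distance
    gain = ref_distance / distance

    # Constant-power pan: bearing 0 = straight ahead, +/-pi/2 = hard right/left
    bearing = math.atan2(dx, dy)
    pan = max(-1.0, min(1.0, bearing / (math.pi / 2)))
    left = gain * math.cos((pan + 1) * math.pi / 4)
    right = gain * math.sin((pan + 1) * math.pi / 4)

    # Propagation delay, used to time-align the direct sound with reflections
    delay = distance / SPEED_OF_SOUND
    return left, right, delay
```

A real engine layers head-related transfer functions and early reflections on top of this, but even a model this simple conveys distance and direction — which is the kind of "just the essentials" trade-off that keeps the maths within reach of a phone.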
Big tech companies like Meta and Microsoft have been investing in spatial audio for some time. Meta (formerly Facebook) acquired Scottish spatial audio company Two Big Ears back in 2016, and Oculus, the VR headset company owned by Meta, has been working internally on solutions.
But as more companies want to jump into the metaverse, they will need to partner up with spatial sound companies. In the US a number of companies like Mach1, Spatial, High Fidelity and Skilled Creative are reinventing sound for the metaverse. But which are the European companies to watch?
Cambridge-based AudioTelligence uses sophisticated algorithms to separate out interfering voices and sounds, letting users focus on the sounds they’re trying to hear. Today that ability is hugely important for improving the sound quality of video calls and voice-controlled devices. It’s also a game-changer for the hard of hearing. But as the way we communicate with each other and our devices evolves, and we start to play in a world of augmented reality, AudioTelligence’s tech will come into its own.
The company raised an $8.5m Series A round in 2020 from Cambridge Innovation Capital, Cambridge Enterprise, Octopus Ventures and CEDAR Audio.
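AudioTelligence’s algorithms are proprietary, but the classic building block for picking out one sound among many with a microphone array is delay-and-sum beamforming: compensate each mic’s arrival-time difference for the target direction, then average, so the target adds coherently while off-axis sounds stay misaligned. A minimal sketch, with a hypothetical `delay_and_sum` function and whole-sample delays for simplicity:

```python
def delay_and_sum(channels, delays):
    """Steer a microphone array by undoing each channel's arrival delay
    (in whole samples), then averaging across microphones.

    Signals from the steered direction line up and add coherently;
    sounds from other directions remain misaligned and are attenuated.

    channels: equal-length lists of samples, one list per microphone
    delays:   per-mic delay (samples) of the target vs. a reference mic
    """
    n = len(channels[0])
    out = [0.0] * n
    for chan, delay in zip(channels, delays):
        for i in range(n):
            j = i + delay  # advance the channel to cancel its arrival delay
            if 0 <= j < n:
                out[i] += chan[j]
    return [sample / len(channels) for sample in out]
```

Production systems use fractional delays, adaptive filters and statistical source-separation methods far beyond this, but the underlying idea — exploiting arrival-time differences across microphones — is the same.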
Audio Analytic, also headquartered in Cambridge, has spent the last 10 years developing technology that allows headphones and other audio equipment to hear and respond to the soundscape around them. For example, your headphones could adjust to the background noise of a busy airport, drowning out the hubbub while still listening out for the tannoy announcement for your flight.
The company raised a $12m Series B round in 2019 from investors including IQ Capital, Cambridge Innovation Capital and National Grid Partners.
Another University of Cambridge spinout, PolyAI, is tackling one of the most challenging speech problems — making voice assistants sound like real humans. PolyAI provides its customers with conversational AI tools that help to automate their customer service function. The underlying technology is described as “multi-turn conversational AI” and puts the latest generation of pre-trained deep learning models into a real-world product.
The company completed a $14m Series B investment round in September 2021, bringing in investors including Amadeus Capital Partners, Passion Capital, Khosla Ventures, Point72 Ventures, Sands Capital and Entrepreneur First.
This University of Southampton spinout delivers controlled, independent sound beams directly to the left and right ears. It creates “virtual headphones” around the user and delivers spatial audio that simulates sounds positioned in 3D space, using its proprietary head-tracking technology.
The company raised a £1.5m round in 2020 from investors Foresight Group, Williams Advanced Engineering and IP Group.
London-based MagicBeans has developed an audio engine that allows full volumetric audio to be experienced at home on everyday devices, as well as in large-scale, multi-user, location-based experiences. You can fill a room with music and even superimpose sounds onto physical objects.
MagicBeans is so far bootstrapped.
All the big Hollywood studios use the 3D audio software tools created by this Portuguese company founded by Nuno Fonseca in 2016. Sound Particles builds up immersive sound effects by creating thousands of points of sound.
The tech was used in recent films and television shows including Dune, Game of Thrones, Frozen II and Star Wars: The Rise of Skywalker. It is also being used by games companies like Blizzard (whose parent, Activision Blizzard, Microsoft recently agreed to acquire) and Epic Games (notably on Fortnite), and by the music industry.
Sound Particles has so far raised just $1.8m in grant and seed funding.
Metaverse sounds could also be about playing with how your avatar sounds. Valencia-based startup Voicemod, for example, has been providing voice modifications for PC gamers for years and now wants to do the same for avatars in the metaverse. Want your avatar to sound like Barry White? Voicemod can fix that for you.
The company raised an $8m Series A round from Bitkraft Ventures in 2020.
Maija Palmer is Sifted’s innovation editor. She covers deeptech and corporate innovation, and tweets from @maijapalmer