Yamaha introduced Surround:AI with their previous-generation receivers. I had an RX-A3080 with me for a month of testing – not provided by Yamaha. While I appreciated the receiver's clean, beautiful sound, Surround:AI did not leave me impressed. Read on to see why!
What is Surround:AI?
I have written at length about Yamaha's Cinema DSP and why I like it. All of the modes – but especially the movie modes – are well-researched and well-implemented, and they work really well at re-creating a movie-theatre-like experience, all but making the walls disappear.
Surround:AI uses the same Cinema DSP modes, dynamically switching between them and reconfiguring them based on what is happening in the soundtrack.
It also tries to bring the dialogue into focus by lifting it out of the mix and turning down the reverb and echo on it. Admittedly, this is a useful thing to do.
What’s the Issue?
While this may sound like a great idea in theory, in practice the algorithm continuously altered the soundtrack in ways that, to my ears, compromised the integrity of the original mix – especially its tonality and steering.
This is in contrast to Yamaha's normal DSP modes, such as Sci-Fi or even the new Enhanced DSP program developed for object-based soundtracks. With those “simple” programs engaged, the soundtrack never lost its original steering or tone; instead the mix became clearer and its different parts more distinct, giving it a lot more room to breathe.
Any Way to Fix It?
I actually feel Yamaha may have overstepped the mark a little with Surround:AI, and I would advise them to go back to the drawing board – or at least give us some configuration options: for example, the ability to adjust the strength and speed of the processing, and whether steering is affected. It might even help to confine Surround:AI to one or two traditional DSP programs so its behaviour doesn't seem so erratic.
One incredibly useful feature, however, is the dialogue highlighting, which tries to separate the dialogue from the mix! If Yamaha were able to remove reverb and echo from the dialogue while applying user-configurable dialogue emphasis within the traditional DSP modes, those modes would become a lot more useful.
This is because the traditional DSP modes such as Sci-Fi apply reverb and echo to the centre channel – and therefore to the dialogue as well. This can make the dialogue muddier and harder to hear, especially in an untreated room. Using AI to remove this muddiness – without upsetting the balance of the rest of the soundtrack – would be incredibly useful!
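To give a rough sense of what “dialogue emphasis” means in signal terms – this is purely my own illustrative sketch, not Yamaha's actual processing – the simplest form is a level boost applied only to the speech “presence” band of the centre channel. The function name, band edges and gain below are all assumptions I have chosen for the example:

```python
import numpy as np

FS = 48_000  # sample rate in Hz (assumed for this sketch)

def emphasize_dialogue(center, fs=FS, lo=1_000.0, hi=4_000.0, gain_db=4.0):
    """Crude dialogue emphasis: boost the speech presence band of a
    centre-channel signal with a brick-wall FFT equaliser.

    This is an illustration only; a real implementation would use a
    proper filter and, ideally, source separation of the dialogue.
    """
    spec = np.fft.rfft(center)
    freqs = np.fft.rfftfreq(len(center), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)   # bins inside the presence band
    spec[band] *= 10 ** (gain_db / 20)     # +4 dB on those bins only
    return np.fft.irfft(spec, n=len(center))

# Usage: a 2 kHz tone (inside the band) is boosted by ~1.58x (+4 dB),
# while a 100 Hz rumble (outside the band) passes through unchanged.
t = np.arange(FS) / FS
speech = np.sin(2 * np.pi * 2_000 * t)
rumble = np.sin(2 * np.pi * 100 * t)
print(round(np.abs(emphasize_dialogue(speech)).max(), 2))  # → 1.58
```

The point of the sketch is the selectivity: only the dialogue-carrying band is touched, which is exactly what the traditional DSP modes lack when they smear reverb across the whole centre channel.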
For now, I would recommend that purists – and those wanting to do critical listening – stay with the traditional DSP programs, my personal favourite still being Sci-Fi. The only time I feel Surround:AI might be of benefit is late-night listening: it makes action more distinct and easier to follow, and it does make the dialogue easier to focus on. Of course, all of that comes at the expense of soundtrack integrity – which, as with dedicated late-night processing modes, matters far less for late-night listening.