The discussion with the developers' views can be found here:
It seems to me that the developers themselves are still feeling their way, and there is a long road ahead before the interactions are fully understood. Until then, each of them will keep telling his own story. The most logical assumption, and indeed the only one on the table, is timing errors. Which of the processes involved have an effect,
and to what degree, is not yet precisely known.
The statement from the makers of JPlay is striking: the sound changes with a different user interface, even when the same playback engine is used behind each plug-in!
In any case, the views of the Pure Music developers deserve our attention.
I quote the entire discussion below for convenience.
What makes one media player sound different from another?
Jonathan Reichbach, President, Sonic Studio (Amarra):
Most can agree that each software application sounds different. Each has a different "fingerprint" with regard to how it interacts with the hardware and software in the computer and how it sounds. For great sound we find that the quality of the audio processing for gain, dither, and EQ all contribute to a different sound. Another determining factor is how the data (music) is read from the disk drive and processed. As Amarra uses the dedicated SSE we are able to optimize every aspect to fine-tune your sound. Apple's Core Audio is much more general purpose, and this comes with certain tradeoffs that can affect sound quality.
Damien Plisson, Founder, Audirvana (Audirvana Plus):
The audio signal path can be very different, with different levels of optimization. Not all players include a fully optimized path down to the kernel CoreAudio implementation itself. The computer activity synchronous to the audio signal needs to be tightly controlled, especially for the purity of the bass frequencies.
Tim Murison, Co-Founder & CTO, BitPerfect Sound Inc. (BitPerfect):
If we're talking only about bit-perfect playback, then I'd argue that the efficiency of getting music to the DAC is the main determinant of sound quality.
Stephen F. Booth, Founder and Developer, sbooth.org (Decibel):
I think that obvious variations in sound are due to differences in DSP applied by the player. For example, if software sample rate conversion is being performed, the sound will be different based on which sample rate converter is being used and how it is configured. If digital volume is being used, the sound will be different based on how the gain is applied. For lossy file formats like MP3, smaller differences in sound can be caused by the type of decoder that is used. Each of these small differences adds up to a different sonic signature for each player.
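Booth's point that "how the gain is applied" changes the output can be made concrete. The sketch below (my own illustration, not code from any of these players; function names are hypothetical) compares plain truncation against TPDF-dithered rounding when scaling a 16-bit sample:

```python
import random

def apply_gain_truncated(sample, gain):
    # Scale a 16-bit integer sample and simply truncate the result.
    return int(sample * gain)

def apply_gain_dithered(sample, gain, rng=random.random):
    # Scale, then add triangular-PDF dither spanning +/-1 LSB before rounding.
    # The dither decorrelates the quantization error from the signal.
    scaled = sample * gain
    dither = rng() - rng()  # triangular distribution over (-1, +1)
    return round(scaled + dither)

# The two strategies emit different output words for the same input,
# one concrete way two players diverge once digital volume is engaged.
s = 12345
print(apply_gain_truncated(s, 0.5))  # 6172
print(apply_gain_dithered(s, 0.5))   # 6172 or 6173, varying over time
```

Over a whole track the dithered version trades deterministic truncation distortion for a low, signal-independent noise floor, which is exactly the kind of audible design choice Booth describes.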
Jussi Laako, Owner, Signalyst (HQ Player):
From my perspective, different processing algorithms (upsampling, dithering, etc.). There are also hardware-dependent differences due to the different software architectures of the players, but these are less deterministic and smaller.
Another factor is digital volume control. The background noise of a good-quality DAC is so low that using digital volume control within the normal listening range is completely feasible. With DSD there are not even "bits to lose"; DSD actually works better when it is not pushed to the max.
Josef Piri & Marcin Ostapowicz, JPlay (JPlay):
It would seem that almost everything can have an effect: for example, the GUI. JPLAY plugins are available for JRiver, iTunes and foobar2000. In theory all these plugins should sound the same, because the same JPLAY playback engine (which runs as a completely isolated Windows service) is used. Yet each plugin sounds different! The GUI is just one example. There are many other factors, such as memory management, output method (DirectSound, WASAPI, ASIO or kernel streaming), buffering, etc., none of which modify the music bits in any way, and yet they have an effect on sound quality.
Jim Hillegass, Founder and CEO, JRiver (JRiver Media Center):
The way a player addresses the sound device may not be bit-perfect. It may not be capable of upsampling, or of playing high-sample-rate files at their native bit depth.
Dr. Rob Robinson, Director of Engineering, Channel D (Pure Music):
This sort of question is impossible to answer without seeing the "innards" of all the players under consideration and comparing them. It is rather like comparing high-performance automobiles without taking a look under the hood. However, I can speak about the design of our products. I have discussed signal flow and streamlined algorithm design elsewhere; see, for example, the 2010 Advances In Computer Audio panel session video on the Rocky Mountain Audio Fest website and Jim Smith's Get Better Sound website interview, among other places. There is no single or main factor or difference; it is a combination of many small things in the design, tied together in the implementation. At a high level, for example: the memory play feature, which circumvents the need to access the music storage media / hard drive during playback; the dithered volume control, which reduces the distortion caused by the word-length reduction of a digital attenuator; and the upsampling feature, which can deliver better performance. But these are made up of many tiny pieces that must work together.
There are many ways of implementing a Memory Play feature, and we have probably tried all of them over the course of programming audio software on the Mac OS over the years. Circular buffers and double buffers, the easy-to-use, traditional techniques for handling streamed data, have their place and are used in our other products. But we have diverged from the usual approach in Pure Music / Pure Vinyl and don't use them, because a no-compromise music player allows an unconstrained choice of design approaches.
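For readers unfamiliar with the "traditional techniques" Robinson names, here is a minimal single-producer/single-consumer circular buffer, the structure he says Pure Music moved away from. This is a generic textbook sketch, not Channel D's code:

```python
class RingBuffer:
    """Minimal circular buffer: a decoder thread pushes samples,
    the audio callback pops them. One slot is kept empty to
    distinguish 'full' from 'empty'."""

    def __init__(self, capacity):
        self.buf = [0.0] * capacity
        self.capacity = capacity
        self.read = 0   # consumer index (audio callback)
        self.write = 0  # producer index (decoder thread)

    def available(self):
        # Number of samples waiting to be consumed.
        return (self.write - self.read) % self.capacity

    def push(self, samples):
        # Append samples until the buffer is full; a real producer
        # would block or retry instead of silently dropping.
        for s in samples:
            nxt = (self.write + 1) % self.capacity
            if nxt == self.read:  # full
                break
            self.buf[self.write] = s
            self.write = nxt

    def pop(self, n):
        # Consume up to n samples in FIFO order.
        out = []
        while len(out) < n and self.read != self.write:
            out.append(self.buf[self.read])
            self.read = (self.read + 1) % self.capacity
        return out
```

A memory-play design instead decodes the whole track into RAM up front, so the playback thread never waits on the decoder or the disk; the trade-off is memory footprint and load-time latency.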
Then there are the not so concrete aspects of the design. This is part of our IP “sweat equity” that came about as a result of understanding best programming practices for smooth data throughput and instruction execution, developed over 27 years of experience writing software for the Apple Macintosh platform. This included writing driver level code to support custom hardware for specialized laboratory instrumentation.
Developing driver-level code requires you to carefully contemplate the effect of every small change you make. The timing of instruction execution and data transfers is critical. If you are designing a word processor or a database application, those factors don't matter much, if at all. But real-time processing such as audio playback is a different matter.
It’s a bit like designing a boat hull to carve through the water with the minimum of turbulence and drag. A small design change can cause an unexpected result (usually for the worse). Driver-level code has to run fast and smooth, and the same design principles apply to any real-time programming task such as audio player software. We want to create the minimum amount of “turbulence” (for example in the form of spikes in CPU usage) during the time when music is being played, parceling tasks to the CPU as smoothly as possible. As one example, computer power draw fluctuates with CPU usage, and these fluctuations can pollute power supply ground references, contributing to digital jitter (as I explained in the 2010 video referenced above). A steady-state, “non-turbulent” design circumvents this problem. Understanding driver design gives the insight needed for player software that uses the CPU and computer resources as smoothly and efficiently as possible.
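Robinson's "parceling tasks smoothly" idea can be sketched numerically. The functions below (my own illustration, with hypothetical names) compute the time budget each audio callback owns, and spread a lump of background work evenly across callbacks instead of doing it in one CPU-spiking burst:

```python
def callback_budget_ms(buffer_frames, sample_rate):
    # Time available per audio callback before the next buffer is due.
    return 1000.0 * buffer_frames / sample_rate

def parcel(total_work_units, callbacks):
    # Spread a large background task evenly across several callbacks,
    # so per-callback CPU load stays nearly constant (no "turbulence").
    base, extra = divmod(total_work_units, callbacks)
    return [base + (1 if i < extra else 0) for i in range(callbacks)]

# At 44.1 kHz with 512-frame buffers, each callback owns ~11.6 ms;
# any housekeeping must fit inside that window without bursting.
print(round(callback_budget_ms(512, 44100), 1))  # 11.6
print(parcel(10, 4))  # [3, 3, 2, 2]
```

Keeping per-period work near-constant keeps CPU draw, and therefore power-rail fluctuation, steady, which is the mechanism Robinson links to reduced digital jitter.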