Pitch Synchronous Overlap-Add (PSOLA) is a widely used technique for altering the pitch and/or time scale of a sound signal. It can be implemented efficiently enough to run on an embedded system. Depending on the type of signal, it can also produce very convincing results.
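To make the idea concrete, here is a minimal TD-PSOLA time-stretch sketch. It assumes a fixed, known pitch period in samples (a real implementation tracks the period over time and places pitch marks adaptively); the function name and interface are illustrative.

```python
import numpy as np

def psola_stretch(x, period, stretch):
    """Minimal TD-PSOLA time-stretch sketch. Assumes a constant pitch
    `period` in samples; `stretch` > 1 lengthens the signal without
    changing its pitch."""
    win = np.hanning(2 * period)             # one two-period grain window
    n_in = (len(x) - 2 * period) // period   # number of analysis grains
    out = np.zeros(int(len(x) * stretch) + 2 * period)
    for k in range(int(n_in * stretch)):
        src = int(k / stretch) * period      # nearest analysis grain
        dst = k * period                     # synthesis position
        out[dst:dst + 2 * period] += x[src:src + 2 * period] * win
    return out
```

Because the Hann-windowed grains are re-spaced by whole pitch periods, consecutive grains stay phase-aligned and overlap-add back to roughly unit gain, which is why the method avoids the phasiness of naive overlap-add.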
A new algorithm for converting two-channel audio material to five channels, based on subband unsupervised adaptive filtering, is proposed in this paper. This algorithm uses a subband analysis-processing-synthesis framework. In each subband, a robust stereo image is obtained using principal component analysis, and an effective energy redistribution among the surround channels is achieved by mapping the cross-correlation between the two input channels to a weighted panning matrix.
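A numpy sketch of the per-subband step described above: PCA on a stereo frame separates a primary (center) component from an ambient residual, and the inter-channel correlation steers how much energy goes to the surrounds. The function name and the specific gain mapping are illustrative, not taken from the paper.

```python
import numpy as np

def pca_upmix_frame(left, right):
    """PCA-based primary/ambient split for one subband frame, plus a
    correlation-derived surround weight (illustrative mapping)."""
    X = np.vstack([left, right])                  # 2 x N stereo frame
    cov = X @ X.T / X.shape[1]                    # 2x2 channel covariance
    w, V = np.linalg.eigh(cov)                    # eigenpairs, ascending
    primary_dir = V[:, -1]                        # dominant stereo direction
    primary = primary_dir @ X                     # center-channel estimate
    ambient = X - np.outer(primary_dir, primary)  # residual per channel
    # correlation near 1 -> coherent (front); near 0 -> diffuse (surround)
    corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1] + 1e-12)
    surround_gain = np.sqrt(max(0.0, 1.0 - abs(corr)))
    return primary, ambient, surround_gain
```

Running this per subband (e.g. after a filterbank analysis stage) and resynthesizing gives the analysis-processing-synthesis structure the abstract describes.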
There are many problems related to the management of musical data that have not yet been solved. These are now being extensively considered in the field of music information retrieval (MIR). Topics that should be included within the scope of this discussion include the problem of automatically classifying musical instrument sounds and musical phrases/styles, music representation and indexing, estimating the similarity of music using both perceptual and musicological criteria, problems of recognizing music using audio or semantic description, building up musical databases, evaluation of MIR systems, intellectual property rights issues, user interfaces, issues related to musical styles and genres, language modeling for music, user needs and expectations, auditory scene analysis, gesture control over musical works, and others. Some of the topics contained within the notion of MIR are covered by the MPEG-7 standard, which defines a description of multimedia content in order to support better interpretation of information. It should be stressed that solving these problems needs human assistance and management.
We explain the different types of modulation effects available for mixing, and then move along to gates, compression, EQ, delay, reverb, de-essing, and a whole lot more.
Google Android to support class-compliant USB audio interfaces.
When it comes to audio performance, Android mobile devices have been a few steps behind their Apple counterparts. Android’s audio engine wasn’t initially optimised, and latency figures were markedly higher than for iOS. Certain companies like Sonoma Wire Works have written software or worked around the limitations of the OS, so you can now do audio-based things like play synths or use DJ tools without crippling latency. That’s the first hurdle cleared. The second, and more vital, limitation with Android, especially for recording, was its inability to play nicely with third-party audio interfaces, which you’d need to get higher-quality audio in and out of the device.
USB headset and speaker owners will love these next few commits:
- b13d9ef : Enable multi-format usb audio output for Hammerhead 
- 03de93f : Enabling USB capture for Hammerhead 
- df82f27 : Add loudness enhancer effect in the default configuration file 
Finally, the Nexus 5 will have support for USB audio, both output and input. And once you plug your awesome USB audio equipment into your Nexus 5, you’ll be happy to find a new Loudness Enhancer DSP mode mixed in amongst the stock audio presets.
Android phones and tablets already make pretty decent portable media players. But some of the best mobile apps for recording or creating audio are still iOS-only. That’s at least partly because Apple’s smartphone and tablet operating system supports low-latency audio processing, something that’s been missing from Android… until now.
The next version of Google Android is due out this fall, and the Android L Developer Preview is already available. It will include a number of audio enhancements, among them support for real-time audio processing.
See your music like never before in a color-coded waveform with dynamic brightness.
The most common visual representation of audio is the waveform display, a graph of amplitude over time. It indicates when the audio is loud or soft, but provides no information about how the audio sounds. With WaveColor, the display is colored to represent the frequency content, making sounds more visible. This requires extracting frequency information from the audio signal and mapping it appropriately to the color space. Ideally, the coloring is independent of recording level, and similar sounds are represented by similar colors. Also, the loudness (not the amplitude) is reflected in the dynamic brightness.
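The exact mapping WaveColor uses is not published; here is one plausible, illustrative version of the idea in numpy: per block, the spectral centroid picks a hue (low frequencies toward red, high frequencies toward blue), and log-RMS loudness, rather than raw amplitude, sets the brightness.

```python
import colorsys
import numpy as np

def wavecolor_blocks(x, sr, block=1024):
    """Color each block of signal `x`: hue from spectral centroid,
    brightness from log-RMS loudness. Mapping is illustrative."""
    colors = []
    for i in range(0, len(x) - block, block):
        frame = x[i:i + block]
        mag = np.abs(np.fft.rfft(frame * np.hanning(block)))
        freqs = np.fft.rfftfreq(block, 1.0 / sr)
        centroid = (mag * freqs).sum() / (mag.sum() + 1e-12)
        hue = 0.66 * min(centroid / (sr / 2), 1.0)   # 0=red .. 0.66=blue
        rms = np.sqrt(np.mean(frame ** 2))
        value = float(np.clip(1.0 + np.log10(rms + 1e-12) / 4.0, 0.0, 1.0))
        colors.append(colorsys.hsv_to_rgb(hue, 1.0, value))
    return colors
```

Because hue depends on the normalized centroid and brightness on log loudness, scaling the recording level shifts brightness only slightly and leaves the color itself unchanged, which matches the level-independence goal stated above.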
I made this to make my life as an audio DSP researcher and algorithm developer easier. The source code is part of MAPL (the Music Analysis and Processing Library). You can find the latest version on GitHub.
All you have to do is implement a subclass of the Processor and fill in the dataset. The tool will then generate nice plots for you. You can also set parameters to control the processor.
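The pattern might look something like the sketch below. Note that the actual Processor interface lives in the MAPL source on GitHub; the base class here is a hypothetical stand-in just to illustrate the subclass-and-fill-in-the-dataset workflow.

```python
import numpy as np

class Processor:
    """Hypothetical stand-in for MAPL's Processor base class."""
    def __init__(self, **params):
        self.params = params   # user-settable parameters
        self.dataset = {}      # name -> signal the tool would plot

    def process(self, x):
        raise NotImplementedError

class RmsProcessor(Processor):
    """Example subclass: block-wise RMS envelope of the input."""
    def process(self, x):
        block = self.params.get("block", 512)
        n = len(x) // block
        rms = np.sqrt(np.mean(x[:n * block].reshape(n, block) ** 2, axis=1))
        self.dataset["input"] = x
        self.dataset["rms"] = rms
        return rms
```

With this shape, `RmsProcessor(block=256).process(signal)` fills `dataset` with named signals, and a plotting front end only needs to iterate over that dictionary.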
Realtime, polyphonic pitch-shifting never sounded so good on guitar—the Bomber is like having a vibrato tailpiece on a pedal, except that your strings don’t break, they stay in tune when you shift the pitch, and you don’t ever have to take your fingers off them. Sure, the dive-bombing function is outrageous, and you can get amazingly cool steel guitar-type sounds, but the really big deal here is the quality of the sound—the designers apparently checked Darth Vader and the Munchkins at the door.
It is an April Fool’s Day prank. Can’t believe someone actually believed it. PSW Staff…