Since March, I have been working as an independent contractor for a number of companies doing music and audio DSP and machine learning. This is a great opportunity to work with and advise some really great companies, particularly small startups, on commercial applications of audio signal processing, machine learning, information retrieval, and cloud infrastructure. You can check my biography for more details.
I have the joy of owning an Ibanez IMG-2010 Guitar Synth, which can be had quite cheaply on eBay, yet is an excellent-quality Steinberger-style guitar that originally sold for quite a princely sum. Wayne Joness's very informative GR-300 site extolls the qualities of this beast in great detail. I've yet to do the conversion to a DB-25 pin connector and fit the G-202 hex fuzz circuit, but it's planned.
I've had an Apple Airport Express 1st Generation, 802.11g model A1084, since new, c. 2004. It has long been superseded by newer versions, and for some time it was just doing duty for me as a USB print server, not as a router. However, it seems that a bug introduced around v6.2 of the firmware causes it to go offline when configured to "join wireless network". Restarting the AE would allow it to run, but it would soon drop off the net. It became particularly troubling as it would become unresponsive almost as soon as it was configured, barely even allowing a single print job to be sent. It's not clear what the cause is or where exactly the bug may lie.
I'm woefully late in pointing this out, but there is now a video by Matt Hines and Jay Leboeuf explaining MediaMined.
Probing neural mechanisms of music perception, cognition, and performance using multivariate decoding
Psychomusicology: Music, Mind and Brain, 22(2):168–174, 2012
Recent neuroscience research has shown increasing use of multivariate decoding methods and machine learning. These methods, by uncovering the source and nature of informative variance in large data sets, invert the classical direction of inference that attempts to explain brain activity from mental state variables or stimulus features. However, these techniques are not yet commonly used among music researchers. In this position article, we introduce some key features of machine learning methods and review their use in the field of cognitive and behavioral neuroscience of music. We argue for the great potential of these methods in decoding multiple data types, specifically audio waveforms, electroencephalography, functional MRI, and motion capture data. By finding the most informative aspects of stimulus and performance data, hypotheses can be generated pertaining to how the brain processes incoming musical information and generates behavioral output, respectively. Importantly, these methods are also applicable to different neural and physiological data types such as magnetoencephalography, near-infrared spectroscopy, positron emission tomography, and electromyography.
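To make the "inverted inference" idea concrete, here is a minimal sketch (not from the paper) of what decoding looks like in code: rather than modeling recordings as a function of the stimulus, a classifier is trained to predict the stimulus condition from simulated multichannel recordings. All data, names, and parameters here are illustrative assumptions.

```python
# Minimal multivariate decoding sketch: predict the stimulus condition
# from simulated multichannel "brain" data (illustrative, not real EEG/fMRI).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels = 200, 32

# Two stimulus conditions (e.g. two musical stimuli), one label per trial
stimulus = rng.integers(0, 2, n_trials)

# Each condition adds a fixed spatial pattern on top of channel noise
pattern = rng.standard_normal(n_channels)
X = rng.standard_normal((n_trials, n_channels)) + np.outer(stimulus, pattern)

# Cross-validated decoding accuracy: above chance means the recordings
# carry information about the stimulus
scores = cross_val_score(LogisticRegression(max_iter=1000), X, stimulus, cv=5)
print(scores.mean())
```

Cross-validation is the important detail: decoding accuracy is only meaningful on trials the classifier has not seen during training.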
Proceedings of the 12th International Conference on Music Perception and Cognition, page 943, Thessaloniki, Greece, July 2012. ICMPC/ESCOM. (abstract).
A software system, MediaMined, is described for the efficient analysis and classification of auditory signals. This system has been applied to the tasks of musical instrument identification, classifying musical genre, distinguishing between music and speech, and detection of the gender of human speakers. For each of these tasks, the same algorithm is applied, consisting of low-level signal analysis, statistical processing and perceptual modeling for feature extraction, and then supervised learning of sound classes. Given a ground truth dataset of audio examples, textual descriptive classification labels are then produced. Such labels are suitable for use in automating content interpretation (auditioning) and content retrieval, mixing and signal processing. A multidimensional feature vector is calculated from statistical and perceptual processing of low level signal analysis in the spectral and temporal domains. Machine learning techniques such as support vector machines are applied to produce classification labels given a selected taxonomy. The system is evaluated on large annotated ground truth datasets (n > 30000) and demonstrates success rates (F-measures) greater than 70% correct retrieval, depending on the task. Issues arising from labeling and balancing training sets are discussed. The performance of classification of audio using machine learning methods demonstrates the relative contribution of bottom-up signal derived features and data oriented classification processes to human cognition. Such demonstrations then sharpen the question as to the contribution of top-down, expectation based processes in human auditory cognition.
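The pipeline described in the abstract (low-level spectral/temporal analysis, statistical summarization into a feature vector, then supervised learning with an SVM) can be sketched roughly as follows. This is a toy illustration of the general technique, not MediaMined's actual code or feature set; the feature choices, frame size, and the tone-vs-noise "classes" are my own assumptions.

```python
# Toy audio-classification pipeline in the spirit of the abstract:
# framewise spectral descriptors -> statistical summary vector -> SVM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def feature_vector(signal, sr=16000, frame=512):
    """Summarize framewise spectral/temporal descriptors into one vector."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    freqs = np.fft.rfftfreq(frame, 1 / sr)
    centroid = (spectra * freqs).sum(axis=1) / (spectra.sum(axis=1) + 1e-9)
    energy = spectra.sum(axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    # Statistical processing: mean and std of each framewise descriptor
    return np.concatenate([[f.mean(), f.std()] for f in (centroid, energy, zcr)])

rng = np.random.default_rng(0)

# Stand-in "ground truth" classes: noisy 440 Hz tones vs. white noise
def tone():
    t = np.arange(16000) / 16000
    return np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(16000)

def noise():
    return rng.standard_normal(16000)

X = np.array([feature_vector(tone()) for _ in range(20)] +
             [feature_vector(noise()) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

# Supervised learning of sound classes with a support vector machine
clf = make_pipeline(StandardScaler(), SVC()).fit(X, y)
print(clf.score(X, y))
```

The real system evaluates on held-out annotated data (n > 30000) and reports F-measures; this sketch only shows the shape of the feature-extraction-then-SVM pipeline on trivially separable classes.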
I'm a big fan of urban biking, having had the privilege of living in Amsterdam and Paris, which are both very bike-friendly cities for daily commuting. Returning to NYC, I got a cast-off commuter bike from a neighbour which, after considerable work, I got into working order. Given that drivers in NYC are nowhere near as aware of cyclists as European drivers, lights are essential. I fitted the crappy old "Fairway Flyer" with a Dutch magneto (aka "dynamo", although that term is strictly incorrect) generator and a 6VAC old-school chrome light. In practice, the drag on the wheel isn't enough to notice, given that the entire bike is hardly built for speed.
If you are a budding music or audio engineering undergraduate student, iZotope is hiring paid interns. The work mostly consists of auditioning our systems.