An important open research question in Music Information Retrieval (MIR) is the extraction of sound objects (e.g. a guitar chord, the beat of a bass drum, the bark of a dog) from polyphonic audio. Recent theoretical advances in mathematical signal processing suggest that sound objects can be identified and extracted directly from the time-frequency plane with markedly improved accuracy. The extracted sound objects can then be organised automatically into perceptually meaningful, easy-to-navigate sound libraries using the latest innovations in MIR-based music similarity. Together, these techniques will form the core of a powerful new toolkit for audio manipulation with widespread applicability in fields such as Sound Design, Computational Auditory Scene Analysis, Artefact Reduction, and Audio Database Organisation.
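As a minimal illustration of the time-frequency representations on which such extraction methods operate, the sketch below computes a naive short-time Fourier transform (STFT) of a pure tone and locates its dominant frequency bin. The window length, hop size, and sample rate are illustrative choices, not values prescribed by any particular method from the text.

```python
import numpy as np

def stft(x, win_len=256, hop=128):
    """Naive short-time Fourier transform: slide a Hann window
    over the signal and take the FFT of each frame."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.array([np.fft.rfft(f) for f in frames])  # shape: (frames, bins)

# A 440 Hz tone sampled at 8 kHz: in the time-frequency plane its
# energy concentrates in a single frequency bin across all frames.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
S = np.abs(stft(tone))
peak_bin = S.mean(axis=0).argmax()
print(peak_bin * sr / 256)  # nearest bin centre to 440 Hz
```

A real sound object (a chord, a drum hit) occupies a structured region of this plane rather than a single bin; the advances referred to above concern identifying and separating such regions.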