Ongoing research in my laboratory provides evidence that the ability to perform fine-grained acoustic analysis in the tens-of-milliseconds range during infancy is one of the most powerful predictors of subsequent language development and disorders. Our prospective, longitudinal research has shown that non-linguistic, spectrotemporally modulated, rapid auditory processing (RAP) skills in the first year of life can serve as a behavioral "marker" of developmental language impairments and are thus of particular utility in the early identification and proactive remediation of such disorders. This talk will briefly summarize studies demonstrating that difficulties in discriminating rapidly successive sensory events early in infancy are predictive of later language outcomes. The main focus will be on findings from our baby-friendly, non-invasive behavioral intervention that specifically targets acoustic mapping. These results demonstrate that interactive exposure to specific classes of non-linguistic, temporally modulated sounds in early infancy engages ongoing experience-dependent processes, supporting the development of more efficient, fine-grained auditory processing skills and thereby optimizing acoustic mapping and automatic processing well before expressive language emerges. Moreover, active training with non-speech stimuli translates to improved processing of speech. Next steps include facilitating active technology transfer of such interactive techniques, with the goal of providing “real-world” intervention solutions at the earliest stages of development.