Inspirational Machine Learning Papers from Other Fields

(a very brief selection of the flood of research in vision/voice/etc)

Factored Spatial and Spectral Multichannel Raw Waveform CLDNNs

Learning the Speech Front-end With Raw Waveform CLDNNs

Both of these CLDNN works are excellent examples of how a single architecture can learn from raw time-series waveforms, build recurrent time-series representations, and then map them to supervised targets.
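To make the conv → recurrent → dense flow concrete, here is a shape-level numpy sketch. The weights are random and untrained, a vanilla RNN stands in for the LSTM layer, and all layer sizes are illustrative assumptions rather than the papers' configurations:

```python
import numpy as np

rng = np.random.default_rng(1)

def cldnn_forward(wave, n_filters=8, hidden=16, n_classes=4):
    """Shape-level sketch of a CLDNN forward pass with random (untrained)
    weights: conv layer on the raw waveform -> simple recurrent layer ->
    dense softmax over supervised targets."""
    k = 9  # conv kernel width (illustrative)
    conv_w = rng.normal(size=(n_filters, k)) * 0.1
    # C: 1-D convolution over the raw waveform (valid padding).
    frames = np.array([[conv_w[f] @ wave[t:t + k] for f in range(n_filters)]
                       for t in range(len(wave) - k + 1)])          # (T', F)
    # L: minimal recurrent layer (vanilla RNN standing in for the LSTM).
    W_x = rng.normal(size=(n_filters, hidden)) * 0.1
    W_h = rng.normal(size=(hidden, hidden)) * 0.1
    h = np.zeros(hidden)
    for x_t in np.tanh(frames):
        h = np.tanh(x_t @ W_x + h @ W_h)
    # D: dense layer mapping the final state to class posteriors.
    W_d = rng.normal(size=(hidden, n_classes))
    logits = h @ W_d
    return np.exp(logits) / np.exp(logits).sum()   # softmax

p = cldnn_forward(rng.normal(size=200))
assert p.shape == (4,) and np.isclose(p.sum(), 1.0)
```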

Spatial Transformer Networks

A powerful method for end-to-end training of localization networks in the image domain, fitting 2D affine transform parameters through a regression task prior to the discriminative task.
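The core sampling step can be sketched in a few lines of numpy. The nearest-neighbour lookup here is a simplification of the differentiable bilinear sampler the paper uses, and the image size and transform are illustrative:

```python
import numpy as np

def affine_grid_sample(image, theta):
    """Warp an image with a 2x3 affine matrix by sampling the input at
    transformed output coordinates (nearest-neighbour for brevity; the
    paper uses differentiable bilinear sampling)."""
    h, w = image.shape
    # Normalized output coordinate grid in [-1, 1], as in the paper.
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    grid = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # (3, H*W)
    src = theta @ grid                          # (2, H*W) source coordinates
    # Map normalized coords back to pixel indices and clip to bounds.
    px = np.clip(np.round((src[0] + 1) * (w - 1) / 2), 0, w - 1).astype(int)
    py = np.clip(np.round((src[1] + 1) * (h - 1) / 2), 0, h - 1).astype(int)
    return image[py, px].reshape(h, w)

identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
img = np.arange(16, dtype=float).reshape(4, 4)
assert np.allclose(affine_grid_sample(img, identity), img)
```

In the full network, a small regression head predicts `theta` from the input, and gradients flow back through the sampler so localization is learned end-to-end.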

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

Generative adversarial modeling of sequence data, relevant to acoustic time series.

WaveNet: A Generative Model for Raw Audio

Google’s WaveNet model for time-series acoustic voice synthesis.
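The building block behind WaveNet's long receptive field is the dilated causal convolution, which can be sketched directly in numpy (the kernel values and dilation below are illustrative):

```python
import numpy as np

def causal_dilated_conv(x, weights, dilation):
    """1-D causal convolution with dilation: output t depends only on
    inputs at t, t-d, t-2d, ...; left zero-padding keeps the length fixed."""
    k = len(weights)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(weights[i] * xp[t + pad - i * dilation]
                         for i in range(k))
                     for t in range(len(x))])

# With weights [1, -1] and dilation 2 this computes x[t] - x[t-2].
y = causal_dilated_conv(np.array([1.0, 2.0, 3.0, 4.0]), [1.0, -1.0], 2)
assert np.allclose(y, [1.0, 2.0, 2.0, 2.0])
```

Stacking such layers with dilations 1, 2, 4, 8, ... doubles the receptive field per layer, which is how WaveNet covers long audio contexts cheaply.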

Radio Machine Learning Papers 

Communication System PHY Learning

Learning to Communicate: Channel Auto-encoders, Domain Specific Regularizers, and Attention

The fundamental construct of the channel autoencoder for learning to communicate digital information across wireless channels: learning new physical-layer encodings that perform as well as traditional expert-defined PHYs while being continuously adaptive and potentially much lower in complexity.
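A structural sketch of that pipeline in numpy: one-hot message in, encoder, power normalization, AWGN channel layer, decoder out. Random untrained linear maps stand in for the learned encoder/decoder networks, and the message/code sizes are illustrative, so no recovery accuracy is claimed:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 2   # 4 possible messages sent over n complex channel uses (illustrative)

# Encoder/decoder as simple linear maps; in practice these are trained
# dense networks optimized end-to-end through the channel.
W_enc = rng.normal(size=(k, 2 * n))
W_dec = rng.normal(size=(2 * n, k))

def encode(msg):
    x = np.eye(k)[msg] @ W_enc                  # one-hot message -> real I/Q code
    return x / np.linalg.norm(x) * np.sqrt(n)   # average power constraint

def channel(x, snr_db=20.0):
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))
    return x + sigma * rng.normal(size=x.shape)  # AWGN impairment layer

def decode(y):
    return int(np.argmax(y @ W_dec))             # argmax over message classes

x = encode(2)
assert np.isclose(np.linalg.norm(x) ** 2, n)     # transmit power is normalized
```

Training would backpropagate a cross-entropy loss on the decoded message through the (differentiable) channel layer, shaping `W_enc`/`W_dec` into a matched constellation and receiver.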

Modulation Recognition

Convolutional Radio Modulation Recognition Networks

Supervised learning of modulation types directly on radio time-series data using a deep convolutional neural network.
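The raw input representation is a 2×N array of I/Q samples. A minimal illustration, mapping bits to idealized QPSK symbols (no pulse shaping or channel effects, unlike the paper's dataset):

```python
import numpy as np

def qpsk_iq(bits):
    """Map a bit sequence to a 2 x N real array of I/Q samples, the raw
    time-series input shape consumed by a convolutional modulation
    recognizer (one row of in-phase, one row of quadrature samples)."""
    assert len(bits) % 2 == 0
    b = np.asarray(bits).reshape(-1, 2)
    # Each bit pair selects one of four unit-energy constellation points.
    i = (1 - 2 * b[:, 0]) / np.sqrt(2)
    q = (1 - 2 * b[:, 1]) / np.sqrt(2)
    return np.stack([i, q])   # shape (2, N)

x = qpsk_iq([0, 0, 1, 1, 0, 1])
assert x.shape == (2, 3)
assert np.allclose(np.sum(x ** 2, axis=0), 1.0)  # unit symbol energy
```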

Semi-Supervised Radio Signal Identification

This work extends purely supervised models into the semi-supervised and unsupervised regimes, since real-world data generally isn’t labeled nicely for us.

Fundamental Structure Learning

Unsupervised Representation Learning of Structured Radio Communications Signals

Unsupervised sparse representation learning using a convolutional denoising autoencoder on radio time-series data for compression, reconstruction, and compact representation.
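A structural sketch of that pipeline: corrupt the input, convolve with a filter bank, soft-threshold for a sparse compact code, then reconstruct with tied weights. The filters here are random rather than learned, and all sizes/thresholds are illustrative, so only the data flow is meaningful:

```python
import numpy as np

rng = np.random.default_rng(2)

def denoising_sparse_code(x, noise_std=0.1, thresh=0.5):
    """Sketch of a convolutional denoising autoencoder's forward pass:
    corruption -> conv encode -> soft-threshold (sparsity) -> decode by
    superposing filter responses (tied-weight transposed convolution)."""
    filters = rng.normal(size=(8, 11))
    filters /= np.linalg.norm(filters, axis=1, keepdims=True)
    noisy = x + noise_std * rng.normal(size=x.shape)   # denoising corruption
    # Encode: 'same'-padded 1-D conv per filter, then soft-threshold.
    code = np.array([np.convolve(noisy, f, mode="same") for f in filters])
    code = np.sign(code) * np.maximum(np.abs(code) - thresh, 0.0)
    # Decode: superpose each filter's contribution back into signal space.
    recon = sum(np.convolve(code[i], filters[i][::-1], mode="same")
                for i in range(len(filters)))
    return code, recon

x = np.sin(np.linspace(0, 8 * np.pi, 256))
code, recon = denoising_sparse_code(x)
assert recon.shape == x.shape
assert np.mean(code == 0) > 0.1   # thresholding yields a sparse code
```

Training would minimize reconstruction error against the *clean* input, which is what forces the filters toward a compact, noise-robust representation.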

Radio Transformer Networks: Attention Models for Learning to Synchronize in Wireless Systems

Initial work on building radio attention models that provide a degree of synchronization and canonical-form signal inputs to a convolutional classification network.

Sequence Model Learning

End-to-End Radio Traffic Sequence Recognition with Deep Recurrent Neural Networks

High-level protocol traffic sequence classification learned from low-level PHY sample representations through feature learning.

Recurrent Neural Radio Anomaly Detection

Traditional reconstruction-based anomaly detection on time-series datasets, with the Kalman predictor replaced by neural-network-based sequence prediction on wideband RF datasets.
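A toy version of the prediction-error approach, with a trivial moving-average predictor standing in for the learned recurrent model (the window and threshold are arbitrary choices for illustration):

```python
import numpy as np

def anomaly_scores(x, order=2, threshold=3.0):
    """Score each sample by the error of a simple predictor (stand-in for
    the learned recurrent predictor): prediction errors that are large
    relative to their own statistics flag anomalies."""
    x = np.asarray(x, dtype=float)
    # Predict x[t] from the mean of the previous `order` samples.
    preds = np.array([x[t - order:t].mean() for t in range(order, len(x))])
    err = np.abs(x[order:] - preds)
    z = (err - err.mean()) / (err.std() + 1e-12)
    return z > threshold   # boolean anomaly mask for samples order..end

t = np.sin(np.linspace(0, 20, 200))
t[120] += 5.0                       # inject an impulsive anomaly
mask = anomaly_scores(t)
assert mask[120 - 2]                # the injected spike is flagged
```

The paper's point is that swapping this predictor for a trained RNN lets the same error-thresholding scheme handle wideband RF behavior that a Kalman model cannot capture.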

Radio Control System Learning

Deep Reinforcement Learning Radio Control and Signal Detection with KeRLym, a Gym RL Agent

Learning to optimize radio control and behavior using deep reinforcement learning techniques in Keras. Describes the KeRLym framework for deep RL, built on the agent and environment interfaces from OpenAI Gym.
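A minimal illustration of the Gym agent/environment contract such a framework programs against, using a toy stand-in environment (this is not an actual KeRLym environment; a real one would expose radio tuning actions and spectrum observations):

```python
import random

class ToyRadioEnv:
    """Toy environment with the classic OpenAI Gym interface (reset/step);
    reward is earned by pointing at the channel containing a signal."""
    def __init__(self, n_channels=4):
        self.n_channels = n_channels
        self.action_space = list(range(n_channels))

    def reset(self):
        self.signal_channel = random.randrange(self.n_channels)
        self.t = 0
        return self.signal_channel   # observation (trivially revealing here)

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == self.signal_channel else 0.0  # detected?
        done = self.t >= 10
        return self.signal_channel, reward, done, {}

# Standard Gym-style episode loop: the agent picks actions, the
# environment returns (observation, reward, done, info).
env = ToyRadioEnv()
obs = env.reset()
total, done = 0.0, False
while not done:
    action = obs                    # "oracle" policy; an RL agent would learn this
    obs, reward, done, _ = env.step(action)
    total += reward
assert total == 10.0
```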