Is it time to stop worrying and love AI? Read a balanced view of how AI can improve the world. Link.

Ablation studies are an important way to determine causality in deep learning — figuring out “Did the model understand the question?” Link

Realistic music generation at scale. DeepMind releases a new white paper on modelling raw audio at scale. Link. Samples.

Generating images from natural language descriptions, presented at CVPR. Link. Code sample.

Training more general networks with procedural level generation: generating progressively harder levels improves on current reinforcement learning techniques, which tend to overfit. Link.

The AI-ON Project releases an update on few-shot distribution learning for music generation. Link.

Most common neural net mistakes from Andrej Karpathy (Director of Tesla AI). Link.

You didn’t try to overfit a single batch first.
You forgot to toggle train/eval mode for the net.
You forgot to .zero_grad() (in pytorch) before .backward().
You passed softmaxed outputs to a loss that expects raw logits.
You didn’t use bias=False for your Linear/Conv2d layer when using BatchNorm, or conversely forgot to include it for the output layer. This one won’t make you silently fail, but these are spurious parameters.
Thinking view() and permute() are the same thing (and incorrectly using view).
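Several of these mistakes can be seen side by side in a small PyTorch sketch. The tiny model, data, and hyperparameters below are hypothetical illustrations, not from any of the linked posts — the point is only to show the correct pattern for each pitfall: bias=False before BatchNorm, zero_grad() before backward(), raw logits into CrossEntropyLoss, train/eval toggling, overfitting a single batch first, and view() vs permute().

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A BatchNorm layer immediately after a Linear layer absorbs any bias,
# so bias=False there avoids spurious parameters; the output layer keeps its bias.
model = nn.Sequential(
    nn.Linear(4, 8, bias=False),
    nn.BatchNorm1d(8),
    nn.ReLU(),
    nn.Linear(8, 3),
)

# "Overfit a single batch first": one fixed batch of 16 samples.
x = torch.randn(16, 4)
targets = torch.randint(0, 3, (16,))

# CrossEntropyLoss applies log-softmax internally: feed it raw logits,
# never softmaxed outputs.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

model.train()  # train mode: BatchNorm uses batch statistics
first_loss, final_loss = None, None
for _ in range(100):
    optimizer.zero_grad()  # clear stale gradients before backward()
    loss = criterion(model(x), targets)
    loss.backward()
    optimizer.step()
    if first_loss is None:
        first_loss = loss.item()
    final_loss = loss.item()

model.eval()  # eval mode: BatchNorm switches to running statistics

# view() reinterprets the same memory order; permute() swaps axes.
t = torch.arange(6).reshape(2, 3)
reshaped = t.view(3, 2)       # [[0, 1], [2, 3], [4, 5]]
transposed = t.permute(1, 0)  # [[0, 3], [1, 4], [2, 5]]
```

If the loss on that single batch does not drop toward zero, something upstream (data, loss, gradients) is broken — which is exactly why overfitting one batch comes first on the list.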

Neural networks applied to super low-cost OpenMVCam computer vision cameras. Link.

Adversarial Reprogramming of Neural Networks — a new goal for adversarial attacks that reprogram the target model to perform a task chosen by the attacker. Link.

Gradient acceleration in activation functions — a deeper look at dropout and a discussion of a new technique. Link.

Posted Jul 2, 2018 in Technology category
