Is it time to stop worrying and love AI? Read a balanced view of how AI can improve the world. Link.
Training more general networks with procedural level generation: generating progressively harder levels improves on current reinforcement learning techniques, which tend to overfit. Link.
Most common neural net mistakes from Andrej Karpathy (Director of Tesla AI). Link.
You didn’t try to overfit a single batch first.
You forgot to toggle train/eval mode for the net.
You forgot to call .zero_grad() (in PyTorch) before .backward().
You passed softmaxed outputs to a loss that expects raw logits.
You didn’t use bias=False for your Linear/Conv2d layer when using BatchNorm, or conversely forgot to include it for the output layer. This one won’t make you silently fail, but it adds spurious parameters.
Thinking view() and permute() are the same thing (and incorrectly using view()).
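Two of the mistakes above are easy to see in a few lines. The view()/permute() confusion can be illustrated with the NumPy analogues reshape/transpose (a sketch, not PyTorch itself: PyTorch's view() reinterprets memory like reshape, permute() swaps axes like transpose), and the softmax-into-a-logit-loss bug shows up as a double softmax that quietly distorts the loss:

```python
import numpy as np

# --- view() vs permute(), via the NumPy analogues ---
x = np.arange(6).reshape(2, 3)      # [[0, 1, 2], [3, 4, 5]]

permuted = x.transpose(1, 0)        # swaps axes: [[0, 3], [1, 4], [2, 5]]
viewed = x.reshape(3, 2)            # reinterprets flat memory: [[0, 1], [2, 3], [4, 5]]

# Same shape, different contents: using view() where you meant permute()
# silently scrambles the data instead of raising an error.
print(np.array_equal(permuted, viewed))  # False

# --- softmaxed outputs passed to a loss that expects raw logits ---
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0])
probs = softmax(logits)

# Cross-entropy for true class 0, computed the intended way (loss softmaxes the logits):
correct_loss = -np.log(probs[0])

# The bug: feeding already-softmaxed probabilities into the same loss
# softmaxes them a second time. Training still runs, just badly.
buggy_loss = -np.log(softmax(probs)[0])

print(correct_loss, buggy_loss)  # the two values disagree
```

The double softmax flattens the probability distribution, so the buggy loss is larger than the correct one and its gradients are damped, which is why this mistake degrades training silently rather than crashing.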
Adversarial Reprogramming of Neural Networks — a new goal for adversarial attacks that reprogram the target model to perform a task chosen by the attacker. Link.
Gradient acceleration in activation functions — a deeper look at dropout and a discussion of a new technique. Link.