Prior knowledge recommended: NLP, RNNs, LSTMs, attention.

Introduction

RNN-based architectures remain popular choices for a wide variety of machine learning tasks. In particular, tasks involving sequential data, such as speech recognition, natural language processing, video processing, anomaly detection, and time-series prediction, require a modeling approach that can remember previous states and synthesize the context surrounding an element's position in a sequence. This active research area has produced a wealth of advancements, and in this blog post I will go over two papers encompassing several key features that make RNN-based architectures so powerful. Specifically, I will go over…
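To make the "remembering previous states" idea concrete, here is a minimal sketch of a single vanilla RNN step in NumPy. This is my own illustration rather than code from either paper; the function name rnn_step, the weight names W_xh and W_hh, and the dimensions are placeholders chosen purely for demonstration.

```python
import numpy as np

# Minimal vanilla RNN step: the hidden state h carries a summary of all
# previous inputs, which is what lets the model "remember" earlier context.
def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # The new hidden state mixes the current input with the previous state.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Illustrative dimensions (not taken from the papers discussed below).
input_dim, hidden_dim, seq_len = 8, 16, 5
rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)  # initial state: no context yet
for x_t in rng.normal(size=(seq_len, input_dim)):
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)  # state is threaded through time
```

The key point is that the same hidden state is threaded through every time step, so each prediction can depend on everything the model has seen so far; LSTMs and attention, assumed as prior knowledge above, refine how that context is retained and accessed.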
