TensorFlow Research Models
This folder contains machine learning models implemented by researchers in TensorFlow. The models are maintained by their respective authors. To propose a model for inclusion, please submit a pull request.
Currently, the models are compatible with TensorFlow 1.0 or later. If you are running TensorFlow 0.12 or earlier, please upgrade your installation.
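Before running any of the models, it can help to confirm the installed TensorFlow meets the 1.0 minimum. A minimal sketch, assuming a standard `MAJOR.MINOR.PATCH` version string such as the one exposed by `tf.__version__` (the helper name `is_compatible` is hypothetical, not part of any model's code):

```python
def is_compatible(version_string, minimum=(1, 0)):
    """Return True if a TensorFlow version string meets the (major, minor) minimum."""
    parts = version_string.split(".")
    major_minor = (int(parts[0]), int(parts[1]))
    return major_minor >= minimum

# With TensorFlow installed, one would pass tf.__version__ instead of a literal.
print(is_compatible("1.4.0"))   # prints True: 1.0 or later
print(is_compatible("0.12.1"))  # prints False: upgrade required
```

Versions at or above 1.0 pass the check; anything on the 0.x line (such as 0.12) does not.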
adversarial_crypto: protecting communications with adversarial neural cryptography.
adversarial_text: semi-supervised sequence learning with adversarial training.
attention_ocr: a model for real-world image text extraction.
audioset: models and supporting code for use with AudioSet.
autoencoder: various autoencoders.
brain_coder: program synthesis with reinforcement learning.
cognitive_mapping_and_planning: implementation of a spatial memory based mapping and planning architecture for visual navigation.
compression: compressing and decompressing images using a pre-trained Residual GRU network.
deeplab: deep labelling for semantic image segmentation.
delf: deep local features for image matching and retrieval.
differential_privacy: differential privacy for training data.
domain_adaptation: domain separation networks.
gan: generative adversarial networks.
im2txt: image-to-text neural network for image captioning.
inception: deep convolutional networks for computer vision.
learning_to_remember_rare_events: a large-scale life-long memory module for use in deep learning.
learning_unsupervised_learning: a meta-learned unsupervised learning update rule.
lexnet_nc: a distributed model for noun compound relationship classification.
lfads: sequential variational autoencoder for analyzing neuroscience data.
lm_1b: language modeling on the one billion word benchmark.
maskgan: text generation with GANs.
namignizer: recognize and generate names.
neural_gpu: highly parallel neural computer.
neural_programmer: neural network augmented with logic and mathematical operations.
next_frame_prediction: probabilistic future frame synthesis via cross convolutional networks.
object_detection: localizing and identifying multiple objects in a single image.
pcl_rl: code for several reinforcement learning algorithms, including Path Consistency Learning.
ptn: perspective transformer nets for 3D object reconstruction.
qa_kg: module networks for question answering on knowledge graphs.
real_nvp: density estimation using real-valued non-volume preserving (real NVP) transformations.
rebar: low-variance, unbiased gradient estimates for discrete latent variable models.
resnet: deep and wide residual networks.
skip_thoughts: recurrent neural network sentence-to-vector encoder.
slim: image classification models in TF-Slim.
street: identify the name of a street (in France) from an image using a Deep RNN.
swivel: the Swivel algorithm for generating word embeddings.
syntaxnet: neural models of natural language syntax.
tcn: self-supervised representation learning from multi-view video.
textsum: sequence-to-sequence with attention model for text summarization.
transformer: spatial transformer network, which allows the spatial manipulation of data within the network.
video_prediction: predicting future video frames with neural advection.
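Since each model above lives in its own subdirectory of the research folder, a common setup step is making that folder importable. A minimal sketch, assuming a hypothetical checkout at `~/models` with the models under `models/research` (the exact import paths vary per model directory):

```python
import os
import sys

# Hypothetical layout: repository cloned to ~/models, model code under models/research.
research_dir = os.path.expanduser("~/models/research")

# Appending the directory to sys.path lets each model package resolve as a
# top-level import (the specific package names differ between models).
if research_dir not in sys.path:
    sys.path.append(research_dir)

print(research_dir in sys.path)  # prints True
```

Alternatively, the same effect can be had for a whole shell session by adding the directory to the `PYTHONPATH` environment variable.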