About Me

Hi, I'm a Ph.D. student in Computer Science at Cornell. My research interests are in machine learning and deep learning, particularly as they apply to vision and robotics. I work with Hod Lipson in the Cornell Creative Machines Lab, and sometimes as a visiting student with Yoshua Bengio in the LISA Lab at the University of Montreal. My work is supported by a NASA Space Technology Research Fellowship.

Before coming to Cornell, I did my undergrad at Caltech and then worked on estimation at a research-oriented applied math startup for a couple of years. Recent projects:

Generative Stochastic Networks

(read the paper) Unsupervised learning of models for probability distributions can be difficult due to intractable partition functions. We introduce a general family of models called Generative Stochastic Networks (GSNs) as an alternative to maximum likelihood. Briefly, we show how to learn the transition operator of a Markov chain whose stationary distribution estimates the data distribution. Because this transition distribution is a conditional distribution, it's often much easier to learn than the data distribution itself. Intuitively, this works by pushing the complexity that normally lives in the partition function into the “function approximation” part of the transition operator, which can be learned via simple backprop. We validate the theory by showing several successful experiments on two image datasets and with a particular architecture that mimics the Deep Boltzmann Machine but without the need for layerwise pretraining.
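The core idea can be sketched in a few lines. The toy below is an illustrative assumption, not the paper's code: a tiny one-hidden-layer denoiser on 1-D two-cluster data stands in for the transition operator, trained by plain backprop, after which alternating corruption and reconstruction runs the Markov chain.

```python
# A toy sketch of the GSN idea (not the paper's implementation): rather
# than model p(x) directly, learn a denoising transition operator and
# sample by running the chain corrupt -> denoise -> corrupt -> ...
# The tiny net, the 1-D data, and all names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy data distribution: two clusters at -2 and +2.
data = np.concatenate([rng.normal(-2.0, 0.1, 500),
                       rng.normal(+2.0, 0.1, 500)])

h = 16                                   # hidden units
w1 = rng.normal(0.0, 1.0, h); b1 = np.zeros(h)
w2 = rng.normal(0.0, 0.1, h); b2 = 0.0
sigma = 1.0                              # corruption noise level
lr = 0.01                                # learning rate

def denoise(xc):
    """One-hidden-layer net: reconstruct x from its corrupted version."""
    a = np.tanh(np.outer(xc, w1) + b1)   # (n, h) hidden activations
    return a @ w2 + b2, a

# Learn the transition operator by simple backprop on squared error.
for step in range(2000):
    x = rng.choice(data, 64)
    xc = x + rng.normal(0.0, sigma, x.shape)   # corruption C(xc | x)
    x_hat, a = denoise(xc)
    err = x_hat - x
    gw2 = a.T @ err / len(x); gb2 = err.mean()
    da = np.outer(err, w2) * (1.0 - a**2)
    gw1 = (da * xc[:, None]).mean(axis=0); gb1 = da.mean(axis=0)
    w1 -= lr * gw1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

# Sample by alternating corruption and reconstruction; the chain's
# stationary distribution approximates the data distribution.
x, samples = 0.0, []
for t in range(2000):
    xc = x + rng.normal(0.0, sigma)
    x = denoise(np.array([xc]))[0][0]
    samples.append(x)
samples = np.array(samples[500:])        # discard burn-in
```

Note that the hard part, the partition function, never appears: the conditional denoiser is trained with ordinary backprop, and sampling is just iterating the learned operator.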

Endless Forms

Watch the two-minute intro video. Users on EndlessForms.com collaborate to produce interesting crowdsourced designs. Since launch, over 4,000,000 shapes have been seen and evaluated by human eyes. This volume of user input has produced some really cool shapes. EndlessForms has received some favorable press. Evolve your own shape »



Aracna

(read the paper) Many labs work on gait learning research, but since each uses a different robotic platform to test ideas, it is hard to compare results across teams. To encourage greater collaboration between scientists, we have developed Aracna, an open-source, 3D-printed platform that anyone can use for robotic experiments.

Cornell Chatbots

AI vs. AI

As part of a class project, Igor Labutov and I cobbled together a speech-to-text + chatbot + text-to-speech system that could converse with a user. We then hooked up two such systems, gave them names (Alan and Sruthi), and let them talk together, producing endless robotic comedy. Somehow the video became popular. There was an XKCD about it, and Sruthi even told Robert Siegel to “be afraid” on NPR. Dress appropriately for the coming robot uprising with one of our fashionable t-shirts.
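The loop that produced the video can be sketched roughly as follows. This is purely illustrative: the real project used an off-the-shelf chatbot plus speech-to-text and text-to-speech, while here two toy keyword bots with made-up canned lines play both roles.

```python
# Illustrative stand-in for the real pipeline: the actual project wired
# speech-to-text -> chatbot -> text-to-speech; here two toy keyword bots
# play the chatbot role and the speech stages are omitted. All canned
# lines below are invented for the sketch.
def make_bot(quirks):
    """Return a reply function; `quirks` maps keywords to retorts."""
    def reply(heard):
        for keyword, retort in quirks.items():
            if keyword in heard.lower():
                return retort
        return "I am not a robot. Are you a robot?"
    return reply

alan = make_bot({"robot": "No, I am a unicorn.",
                 "unicorn": "A unicorn? Me too."})
sruthi = make_bot({"unicorn": "You were mistaken, which is odd.",
                   "robot": "Yes, I am a robot."})

# Hook the two systems together: each bot's output is the other's input.
bots = [("Alan", alan), ("Sruthi", sruthi)]
utterance, transcript = "Hello there.", []
for turn in range(6):
    name, bot = bots[turn % 2]
    utterance = bot(utterance)
    transcript.append((name, utterance))

for name, line in transcript:
    print(f"{name}: {line}")
```

Even with such trivial agents, feeding each one's output back as the other's input quickly settles into absurd loops, which is much of what made the real conversation funny.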


Gait Learning on QuadraTot

(read the paper) Getting robots to walk is tricky. We compared many algorithms for automating the creation of quadruped gaits, with all the learning done in hardware (read: very time-consuming). The best gaits we found were nearly 9 times faster than a hand-designed gait and exhibited complex motion patterns that contained multiple frequencies, yet showed coordinated leg movement. More recent work blends learning in simulation and reality to create even faster gaits.

For more information, see the project website (with video), project Trac, and code on GitHub.

Never mind all this, just show me the videos!
Or, if you prefer, here's a slightly outdated CV.

Selected Papers and Posters more »

  1. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? Advances in Neural Information Processing Systems 27 (NIPS '14), pages 3320-3328. 8 December 2014.
    See also: earlier arXiv version.
    Oral presentation.
    abstract▾ | bib▾
  2. Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. arXiv. 6 November 2014.
    See also: arXiv page.
    abstract▾ | bib▾
  3. Yoshua Bengio, Éric Thibodeau-Laufer, Guillaume Alain, and Jason Yosinski. Deep Generative Stochastic Networks Trainable by Backprop. Proceedings of the International Conference on Machine Learning. 21 June 2014.
    See also: supplemental section, earlier arXiv versions.
    abstract▾ | bib▾
  4. Jason Yosinski and Hod Lipson. Visually Debugging Restricted Boltzmann Machine Training with a 3D Example. Presented at Representation Learning Workshop, 29th International Conference on Machine Learning. 1 July 2012.
    abstract▾ | bib▾
  5. Jeff Clune, Jason Yosinski, Eugene Doan, and Hod Lipson. EndlessForms.com: Collaboratively Evolving Objects and 3D Printing Them. Proceedings of the 13th International Conference on the Synthesis and Simulation of Living Systems. 21 July 2012.
    Winner of Best Poster award.
    abstract▾ | bib▾
  6. Sara Lohmann, Jason Yosinski, Eric Gold, Jeff Clune, Jeremy Blum, and Hod Lipson. Aracna: An Open-Source Quadruped Platform for Evolutionary Robotics. Proceedings of the 13th International Conference on the Synthesis and Simulation of Living Systems. 19 July 2012.
    Winner of Best Presentation award.
    abstract▾ | bib▾
  7. Jason Yosinski, Jeff Clune, Diana Hidalgo, Sarah Nguyen, Juan Cristobal Zagal, and Hod Lipson. Evolving Robot Gaits in Hardware: the HyperNEAT Generative Encoding vs. Parameter Optimization. Proceedings of the 20th European Conference on Artificial Life, Paris, France, pages 890-897. 8 August 2011.
    abstract▾ | bib▾

Google scholar | see all 16 papers and posters »

Selected Press more »

XKCD: AI
7 September 2011

BBC: First 'chatbot' conversation ends in argument
(Video interview with Igor and me)
8 September 2011

New Scientist: One Percent: Evolve your own objects for 3D printing
(on homepage)
19 August 2011

Through the Wormhole with Morgan Freeman: Are Robots the Future of Human Evolution? See my walking robots from 7:00-7:45 and 9:40-11:10.
(Season 4, episode 7; unreliable video link)
10 July 2013

see more press »