Traditionally, medical discoveries have been made by observing associations, designing experiments to test these hypotheses, and subsequently applying more advanced modeling techniques to better quantify these associations. However, this process can fail for complex associations, particularly for images and signals. For example, hand-engineering features to describe a “normal blood vessel” in an image of a retina is challenging because of the wide variety of pixel values and shapes in real patient images. By contrast, deep learning, a machine learning technique that learns its own features, has been tremendously successful in identifying objects in natural scenes, such as cats in a variety of backgrounds and postures. In this paper, we present a case study of discovering new knowledge in retina images using deep learning models trained on retina images from over 280,000 patients and validated on two independent datasets with 12,026 and 999 patients, respectively. We show that retina images alone contain sufficient information to predict previously known associations with unprecedented accuracy, such as age to within 3.3 years. In addition, our models are able to predict previously unknown associations, such as gender with an AUC of 0.97, smoking status with an AUC of 0.72, ethnicity with a kappa exceeding 0.6, systolic blood pressure within 11.23 mmHg, and HbA1c within 1.39%. We further show that our models used distinct aspects of the anatomy, such as the optic disc or blood vessels, to generate each prediction, opening avenues of further research.