SCS Undergraduate Thesis Topics
Carl Doersch | Tai Sing Lee | Temporal Continuity Learning for Deep Belief Networks
Deep Belief Networks (DBNs) have proven successful for the unsupervised learning of image representations, and for the subsequent use of those representations in object recognition. However, much work remains before DBNs can learn representations as robust as those in Inferotemporal (IT) Cortex, the object recognition center of the human brain. Neurons in IT are selective for particular objects, yet their representations are relatively invariant to changes in, for example, object pose or lighting. Physiological evidence suggests that IT neurons learn these representations by exploiting the Temporal Continuity of visual objects: if an object is present in the visual field at time t, then the same object will most likely still be present at time t+1. Thus, if the neural representation changes slowly with time, the neurons are likely responding to objects rather than to spurious features. The goal of this research is to show that this constraint, that representations should change slowly over time, can be used to train the representations in DBNs, and that the resulting representations will be more useful for object recognition.
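The slowness constraint described above can be made concrete as a penalty on how much a learned representation changes between consecutive frames. The following is a minimal sketch, not the thesis's actual method: the `encode` function is a hypothetical stand-in for one DBN layer (a random linear map followed by a logistic sigmoid), and `slowness_penalty` is the kind of term one might add to a training objective to encourage temporal continuity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a single DBN layer: a fixed random weight
# matrix W followed by a logistic sigmoid. 64-pixel frames -> 16 units.
W = rng.normal(scale=0.1, size=(16, 64))

def encode(frames):
    """Map a (T, 64) sequence of frames to (T, 16) hidden representations."""
    return 1.0 / (1.0 + np.exp(-frames @ W.T))

def slowness_penalty(frames):
    """Mean squared change of the representation between consecutive frames.

    Adding a term like this to the training objective penalizes
    representations that change quickly in time, encouraging the
    temporal continuity described in the abstract.
    """
    h = encode(frames)
    return np.mean((h[1:] - h[:-1]) ** 2)

# A slowly drifting sequence (a sine pattern shifting a little per frame)
# versus a sequence of independent random frames.
t = np.linspace(0.0, 1.0, 20)[:, None]
phase = np.linspace(0.0, 1.0, 64)[None, :]
slow_seq = np.sin(2.0 * np.pi * (t + phase))
random_seq = rng.normal(size=(20, 64))

slow_cost = slowness_penalty(slow_seq)
random_cost = slowness_penalty(random_seq)
```

In a real training setup the penalty would be differentiated with respect to the layer's weights and combined with the usual DBN learning signal; here the weights are fixed only to keep the sketch short and self-contained.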