Implicit Deep Learning

Principal Investigator:
Stella Yu

Most modern neural networks are defined explicitly as a sequence of layers with various connections. Any desired property, such as translational equivariance, must be hard-coded into the architecture, which is inflexible and restrictive. In contrast, implicit models are defined as a set of constraints to satisfy, or criteria to optimize, at test time. This framework can express a large class of operations, including test-time optimization, planning, dynamics, constraints, and feedback. Our research explores implicit models that integrate invariance and equivariance constraints in computer vision applications.
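As a minimal sketch of the idea, one common form of implicit model (in the style of deep equilibrium models) defines its output not by a forward pass through stacked layers, but as the solution of a fixed-point constraint z* = f(z*, x), found by iteration at inference time. The specific function f, the weight scaling, and the tolerance below are illustrative assumptions, not the project's actual method.

```python
import numpy as np

# Implicit layer sketch: the output z* is defined implicitly by the
# constraint z* = f(z*, x), rather than by an explicit layer stack.
rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d)) * 0.1  # small scale keeps f a contraction
U = rng.normal(size=(d, d))

def f(z, x):
    # One illustrative choice of update map (tanh keeps it 1-Lipschitz).
    return np.tanh(W @ z + U @ x)

def implicit_layer(x, tol=1e-8, max_iter=500):
    # Solve the fixed-point constraint by simple iteration.
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

x = rng.normal(size=d)
z_star = implicit_layer(x)
# The output satisfies the defining constraint up to tolerance:
residual = np.linalg.norm(f(z_star, x) - z_star)
print(residual < 1e-6)
```

Because the model is specified by the constraint rather than by a fixed computation graph, the same definition admits different solvers at test time, which is what makes properties like equivariance expressible as constraints rather than architectural choices.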