Contact Information

Garrett Ethan Katz, Ph.D.
Assistant Professor
Department of Electrical Engineering & Computer Science
CST 4-189, Syracuse University
(315) 443-3565
gkatz01@syr.edu

Curriculum Vitae
Google Scholar
Personal site

I teach and research various topics in artificial intelligence, cognitive modeling, machine learning, neural computation, optimization, and robotics. My lab focuses on "vertically-integrated" AI, from low-level sensorimotor control up through high-level cognition and reasoning. Below are some representative research projects, followed by recently taught courses.

Research

Single-pass Learning
The goal of this project is to develop single-pass learning rules with the efficiency of classical associative memory but the high capacity of iterative methods. As a starting point, we have established an impossibility result: a certain family of rules for the linear threshold neuron (which includes Hebbian learning and backpropagation as special cases) cannot be both single-pass and full-capacity.
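To make the single-pass setting concrete, here is a minimal sketch (an illustration, not the lab's actual method) of the classical Hebbian rule for a linear threshold neuron. Each training pair is visited exactly once, with no iterative re-presentation of the data; the pattern counts and dimensions below are arbitrary choices for the example.

```python
import numpy as np

def hebbian_single_pass(X, y):
    """Single-pass Hebbian learning for a linear threshold neuron.
    X: (num_patterns, num_inputs) bipolar (+/-1) input patterns.
    y: (num_patterns,) bipolar (+/-1) target outputs.
    """
    w = np.zeros(X.shape[1])
    for x_p, y_p in zip(X, y):   # one pass: each pair is seen exactly once
        w += y_p * x_p           # Hebbian increment
    return w

def predict(w, X):
    return np.sign(X @ w)        # linear threshold unit

# Toy demonstration: few random patterns relative to the input dimension,
# so single-pass Hebbian storage recalls them reliably.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(5, 64))
y = rng.choice([-1.0, 1.0], size=5)
w = hebbian_single_pass(X, y)
acc = np.mean(predict(w, X) == y)
```

The capacity limitation mentioned above shows up as the pattern count grows toward the input dimension: cross-talk between stored patterns degrades recall, which iterative methods can correct but a single-pass rule of this family cannot.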


Robotic imitation learning and motion planning
This line of research involves full- and upper-body humanoid robots that learn by imitating human teachers. We developed a framework called CERIL that uses Cause-Effect Reasoning for Imitation Learning. It copies the goals of the demonstrator rather than their actions, enabling generalization to novel situations in which object positions and properties differ from those seen in the demonstration.

Automated Algorithm Discovery
The goal of this project is to develop methods that can automatically design new algorithms, with improved optimality or complexity properties relative to known algorithms. As a starting point we have focused on state-space puzzles such as Rubik's cube, where the "algorithms" take the form of rule tables which partition the state space into subsets and specify which action sequence to perform in each subset. We formulate a multi-objective optimization problem to simultaneously minimize the size of the rule table (complexity) and the length of solution paths it induces (optimality). Our optimization method uses hypervolume scalarization in conjunction with a Monte-Carlo backtracking search.
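As a hedged illustration of the rule-table idea (a toy example, not the Rubik's cube system itself), consider a trivial puzzle whose states are the integers 0 through 7, with a single action that increments the state modulo 8 and goal state 0. A rule table assigns each state an action sequence, and the two competing objectives are the table's size (complexity) and the solution-path lengths it induces (optimality):

```python
GOAL = 0

def step(s):
    """The puzzle's single action: increment the state modulo 8."""
    return (s + 1) % 8

def solve(s, table):
    """Fire table rules until the goal is reached; return the path length."""
    length = 0
    while s != GOAL:
        seq = table[s]            # action sequence assigned to this state
        for _ in range(seq):
            s = step(s)
            length += 1
    return length

# One possible table: each non-goal state s is assigned "apply the action
# (8 - s) times", reaching the goal in a single rule firing.
table = {s: 8 - s for s in range(1, 8)}

table_size = len(table)                                   # complexity objective
mean_len = sum(solve(s, table) for s in range(1, 8)) / 7  # optimality objective
```

A smaller table (e.g., one rule covering all states) would lower the complexity objective but could lengthen solution paths; the multi-objective search described above trades these off, using hypervolume scalarization to score candidate tables against both objectives at once.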

Neural Virtual Machines
This work aims to design neural networks that can represent, and emulate execution of, traditional computer programs written in symbolic languages. The networks can be trained on algorithmic tasks from scratch, or fine-tuned after "compiling" human-authored programs into initial weights. We have applied our method to algorithmic list-processing tasks as well as robotics and automated planning algorithms. The basis of our technique is fast, gated associative weight changes, using a novel "store-erase" weight update rule that emulates (over)writing the contents of random-access memory. Our analysis showed that arbitrarily many updates can be made while maintaining bounded weights and correct emulation of random-access memory.
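The store-erase idea can be sketched in a few lines. This is an illustrative toy, not the published formulation: the dimensions, the one-hot addressing, and the function names are assumptions made for the example. Writing first erases whatever the weights currently associate with a key, then stores the new value, so repeated overwrites do not let the weights grow without bound.

```python
import numpy as np

def write(W, key, value):
    """Store-erase update: overwrite the association key -> value in W."""
    old = W @ key                          # current content at this key
    return W + np.outer(value - old, key)  # erase the old value, store the new

def read(W, key):
    return W @ key

d = 8
rng = np.random.default_rng(1)
key = np.zeros(d); key[3] = 1.0            # one-hot "memory address"
W = np.zeros((d, d))

v1 = rng.standard_normal(d)
v2 = rng.standard_normal(d)
W = write(W, key, v1)                      # first write to the address
W = write(W, key, v2)                      # overwrite the same address
```

After the second write, reading the key returns v2 exactly, and the weights carry no residue of v1; a pure Hebbian (store-only) update would instead accumulate both values, growing the weights with every write.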

Numerical methods for neural network analysis
We have developed various numerical methods to better analyze and understand neural network activation and learning dynamics. For example, we introduced directional fibers, mathematical objects that can be numerically traversed to enumerate many distinct solutions to systems of non-linear equations. They can be applied to enumerate fixed points of recurrent neural networks and other dynamical systems, or stationary points of objective functions. We also devised a predictor-corrector method to numerically traverse the loss level-sets of neural networks, in order to analyze the variation in generalization error among parameter vectors with equal training loss.

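For intuition, the underlying fixed-point problem can be illustrated with a plain Newton iteration. This is not the directional-fiber method itself, only the kind of equation it targets: for a recurrent update v' = tanh(Wv + b), a fixed point solves f(v) = tanh(Wv + b) - v = 0. The network size and the contraction scaling below are assumptions chosen so this toy converges; directional fibers instead traverse a curve passing through many fixed points at once.

```python
import numpy as np

def newton_fixed_point(W, b, v0, iters=50):
    """Find one fixed point of v -> tanh(W v + b) by Newton's method."""
    v = v0.copy()
    for _ in range(iters):
        t = np.tanh(W @ v + b)
        f = t - v                                   # fixed-point residual
        # Jacobian of f: diag(1 - tanh^2) @ W - I
        J = (1 - t**2)[:, None] * W - np.eye(len(v))
        v = v - np.linalg.solve(J, f)
    return v

rng = np.random.default_rng(2)
W = 0.2 * rng.standard_normal((4, 4))   # small weights: a contraction map
b = rng.standard_normal(4)
v_star = newton_fixed_point(W, b, rng.standard_normal(4))
residual = np.linalg.norm(np.tanh(W @ v_star + b) - v_star)
```

With larger weights the network typically has many fixed points, and a single Newton run finds only whichever one its seed happens to reach; enumerating them systematically is exactly the problem the directional-fiber traversal addresses.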
Cryo-electron Microscopy
In a previous research project we applied Bayesian inference techniques to cryo-electron micrographs of biological virus particles, to determine their 3D structure and understand their molecular machinery.

Advising

Current Ph.D. students

Naveed Tahir works on constrained deep learning with applications in clustering and network pruning.
Akshay works on robotic manipulation and adversarial learning.
Xulin Chen works on robust reinforcement learning and legged robotic locomotion.
Borui He works on robot perception and predictive modeling.
Ruipeng Liu works on single-pass learning and vector-symbolic architectures.

Former advisees

Teaching

CIS 700: Special Topics
PhD-level special topics courses, focused on reading, presenting, and reproducing research articles in a recent research area. Recent course topics include deep learning approaches to neurosymbolic programming, automated theorem proving, and robotic motion planning.

CIS 467/667: Introduction to Artificial Intelligence
Undergraduate (467) and graduate (667) level introductory courses on Artificial Intelligence. Topics include tree search algorithms (e.g., iterative deepening, A*, minimax); probabilistic modeling (e.g., maximum-likelihood estimation, expectation maximization, hidden Markov models); reinforcement learning (e.g., Markov decision processes, policy iteration, tabular temporal-difference Q-learning, policy gradient); basics of neural networks and gradient descent; and automated reasoning methods such as forward chaining, unification, and resolution.

CIS 375: Introduction to Discrete Mathematics
Undergraduate course on discrete mathematics with an emphasis on mathematical proofs. Covers propositional and first-order logic, set theory, functions and relations, partially ordered sets, recursively defined sets, and mathematical/structural induction.

ECS 102/CIS 151: Fundamentals of Computing
Undergraduate course on introductory programming, using the Python language. Covers data types, literals, variables, control flow, libraries and packages, automated testing, and basics of object-oriented programming.