Contact Information

Garrett Ethan Katz, Ph.D.
Assistant Professor
Department of Electrical Engineering & Computer Science
CST 4-189, Syracuse University
(315) 443-3565
gkatz01@syr.edu

Curriculum Vitae
Google Scholar
Personal site

I teach and research various topics in artificial intelligence, cognitive modeling, machine learning, neural computation, optimization, and robotics. My lab focuses on "vertically-integrated" AI, from low-level sensorimotor control up through high-level cognition and reasoning. Below are some representative research projects, followed by recently taught courses.

Research

Automated Algorithm Discovery
The goal of this project is to develop methods that can automatically design new algorithms, with improved optimality or complexity properties relative to known algorithms. As a starting point we have focused on state-space puzzles such as Rubik's cube, where the "algorithms" take the form of rule tables which partition the state space into subsets and specify which action sequence to perform in each subset. We formulate a multi-objective optimization problem to simultaneously minimize the size of the rule table (complexity) and the length of solution paths it induces (optimality). Our optimization method uses hypervolume scalarization in conjunction with a Monte-Carlo backtracking search.
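To give a rough flavor of the hypervolume-scalarization step (an illustrative sketch, not the lab's actual implementation; the candidate tables, objective values, reference point, and weight distribution below are all invented), each candidate rule table is scored by two objectives to minimize, and a random weight vector collapses the pair into a single score whose expectation relates to the hypervolume of the Pareto front:

```python
import random

# Hypothetical candidate rule tables, each scored by two objectives to
# minimize: rule-table size (complexity) and average solution length
# (optimality). The numbers here are invented for illustration.
candidates = {
    "table_A": (12, 30.0),
    "table_B": (40, 18.0),
    "table_C": (25, 22.0),
}

# Reference point assumed to be dominated by all candidates
# (worst-case values of each objective).
ref = (50, 40.0)

def hv_scalarize(obj, weights):
    """Hypervolume scalarization of a minimization objective vector:
    the score is the smallest weighted improvement over the reference
    point across objectives; maximizing its expectation over random
    weight vectors promotes hypervolume of the Pareto front."""
    return min((r - y) / w for y, r, w in zip(obj, ref, weights))

random.seed(0)
for _ in range(3):
    # Each random positive weight vector defines one scalarized subproblem.
    w = [random.uniform(0.1, 1.0) for _ in range(2)]
    best = max(candidates, key=lambda k: hv_scalarize(candidates[k], w))
    print(f"weights={w[0]:.2f},{w[1]:.2f} -> best candidate: {best}")
```

Different weight draws favor different trade-offs between table size and solution length, which is how a single scalar search (such as a Monte-Carlo backtracking search) can cover the multi-objective front.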

Neural Virtual Machines
This work aims to design neural networks that can represent, and emulate execution of, traditional computer programs in symbolic languages. The networks can be trained on algorithmic tasks from scratch, or fine-tuned after "compiling" human-authored programs into initial weights. We have applied our method to algorithmic list processing tasks as well as robotics and automated planning algorithms. The basis of our technique is fast, gated associative weight changes, using a novel "store-erase" weight update rule that emulates (over)writing contents of random-access memory. Our analysis found that arbitrary, unlimited updates can be made while maintaining bounded weights (shown on the left for the 3D case) and correct emulation of random-access memory.
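A minimal sketch of a gated associative overwrite in this spirit (the exact store-erase rule from the papers may differ; this version assumes a linear read-out W @ k and unit-norm keys):

```python
import numpy as np

def store_erase(W, k, v):
    """Sketch of a store-erase associative update (not necessarily the
    exact published rule): subtract whatever value the key k currently
    retrieves, then add the new value v, so that W @ k == v afterwards.
    Assumes the key k has unit norm."""
    return W + np.outer(v - W @ k, k)

rng = np.random.default_rng(0)
d = 8
W = np.zeros((d, d))
k = rng.standard_normal(d)
k /= np.linalg.norm(k)          # unit-norm key
v1 = rng.standard_normal(d)
v2 = rng.standard_normal(d)

W = store_erase(W, k, v1)
assert np.allclose(W @ k, v1)   # v1 is now stored under key k
W = store_erase(W, k, v2)       # overwrite: erase v1, store v2
assert np.allclose(W @ k, v2)

# Repeated overwrites stay bounded: for a fixed unit-norm key, the
# weight matrix only ever holds the most recently stored value.
for _ in range(100):
    W = store_erase(W, k, rng.standard_normal(d))
print("weight norm after 100 overwrites:", np.linalg.norm(W))
```

The erase term `W @ k` is what distinguishes this from a plain Hebbian outer-product update: without it, repeated writes to the same key would superimpose values and grow the weights without bound.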

Robotic Imitation Learning
This line of research involves full- and upper-body humanoid robots that can learn by imitating human teachers. We developed a framework called CERIL that uses Cause-Effect Reasoning to do Imitation Learning. It copies the goals of the demonstrator rather than their actions, enabling generalization to novel situations where object positions and properties may be different from what was seen in demonstration.
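A toy sketch of the goal-copying idea (purely illustrative; CERIL's actual representations, inference, and planning are far richer): extract the final object relations from a demonstration, then plan fresh actions to reach the same relations from a novel start state:

```python
# Imitating the *effect* of a demonstration rather than its literal
# actions. States are modeled as {block: support} dictionaries; the
# demonstrated goal is the final set of (block, support) relations.

def final_relations(state):
    """Extract goal relations from a block-stacking state."""
    return {(b, s) for b, s in state.items()}

def plan_to_goal(start, goal):
    """Naive planner: pick-and-place each block onto its goal support,
    ordering moves bottom-up so a support is in place before anything
    is stacked on it. (Toy code: assumes the goal is an acyclic stack.)"""
    state = dict(start)
    actions = []
    remaining = dict(goal)
    while remaining:
        for b, s in list(remaining.items()):
            # A move is safe once its target support is already placed.
            if s == "table" or remaining.get(s) is None:
                actions.append(("pick_and_place", b, s))
                state[b] = s
                del remaining[b]
    return actions, state

# Demonstration: the teacher built A-on-B-on-table.
demo_final = {"B": "table", "A": "B"}
goal = dict(final_relations(demo_final))

# Novel situation: the blocks start stacked the other way around.
new_start = {"A": "table", "B": "A"}
actions, result = plan_to_goal(new_start, goal)
print(actions)                 # the imitator's own action sequence
print(result == demo_final)    # same goal state achieved
```

Because only the goal relations are copied, the planned action sequence can differ entirely from the demonstrated one while still achieving the demonstrator's intent.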

Numerical Methods for Neural Network Analysis
We have developed various numerical methods to better analyze and understand neural network activation and learning dynamics. For example, we introduced directional fibers (pictured on the left), mathematical objects that can be numerically traversed to enumerate many distinct solutions to systems of non-linear equations. They may be applied to enumerate fixed points of recurrent neural networks and other dynamical systems, or stationary points of objective functions. We also devised a predictor-corrector method to numerically traverse the loss level-sets of neural networks, in order to analyze the variation in regularization among parameter vectors with equal training loss.
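To give the flavor of predictor-corrector traversal (a toy 2D loss with a circular level set, not the actual networks or the published algorithm), each step predicts along the tangent of the level set and then corrects back onto it with a Newton step along the gradient:

```python
import math

# Toy loss with circular level sets: L(x, y) = x^2 + y^2.
def L(x, y):
    return x * x + y * y

def grad(x, y):
    return (2 * x, 2 * y)

def traverse_level_set(x, y, step=0.1, n_steps=100):
    """Minimal predictor-corrector continuation along {(x, y): L(x, y) = c}."""
    c = L(x, y)                       # level to maintain
    path = [(x, y)]
    for _ in range(n_steps):
        gx, gy = grad(x, y)
        gnorm = math.hypot(gx, gy)
        # Predictor: step along the tangent (perpendicular to the gradient).
        tx, ty = -gy / gnorm, gx / gnorm
        x, y = x + step * tx, y + step * ty
        # Corrector: one Newton step along the gradient restores L(x,y) ~ c.
        gx, gy = grad(x, y)
        r = L(x, y) - c
        denom = gx * gx + gy * gy
        x, y = x - r * gx / denom, y - r * gy / denom
        path.append((x, y))
    return path

path = traverse_level_set(1.0, 0.0)
drift = max(abs(L(px, py) - 1.0) for px, py in path)
print(f"max level drift along path: {drift:.2e}")
```

For a neural network loss the same two-phase structure applies, but the corrector projects a high-dimensional parameter vector back onto the loss level set rather than a point in the plane.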

Cryo-Electron Microscopy
In a previous research project we applied Bayesian inference techniques to cryo-electron micrographs of biological virus particles, to determine their 3D structure and understand their molecular machinery.

Teaching

CIS 700: Special Topics
PhD-level special topics courses, focused on reading, presenting, and reproducing research articles in a recent research area. Recent course topics include deep learning approaches to program representation, induction, and synthesis, and to automated theorem proving.

CIS 667: Introduction to Artificial Intelligence
Graduate-level introductory course on artificial intelligence. Covers tree search algorithms (e.g., iterative deepening, A*, minimax); probabilistic modeling (e.g., maximum-likelihood estimation, expectation maximization, hidden Markov models); reinforcement learning (e.g., Markov decision processes, policy iteration, tabular temporal-difference Q-learning, policy gradient); basics of neural networks and gradient descent; and automated reasoning methods such as forward chaining, unification, and resolution.

ECS 102: Introduction to Computing
Undergraduate course on introductory programming, using the Python language. Covers data types, literals, variables, control flow, libraries and packages, automated testing, and basics of object-oriented programming.