I recently finished my Ph.D. in computer science studying multi-agent reinforcement learning (at the intersection of machine learning and game theory) under Dr. Isbell and Dr. Weiss. My dissertation focused on algorithms for computing how an agent should act in the presence of other agents who are very smart but not necessarily infinitely smart (perfectly rational). I therefore consider myself part of the multi-agent learning community more than the game theory or operations research communities, although the overlap is substantial. My work targets approximate optimality, as opposed to regret- or safety-value-based approaches, and addresses both fully observable and partially observable problems as well as both cooperative and general-sum problems.
I am currently working at Google on the Gmail spam filter, where my efforts have shifted away from multi-agent learning and toward traditional scalable machine learning.

Liam MacDermed, Charles L. Isbell. Point Based Value Iteration with Optimal Belief Compression for Dec-POMDPs. NIPS, 2013. [NIPS2013_104.pdf] [poster]

Liam MacDermed, Karthik S. Narayan, Charles L. Isbell, Lora Weiss. Quick Polytope Approximation of All Correlated Equilibria in Stochastic Games. AAAI, 2011. [QPACE.pdf] [poster]

Liam Mac Dermed, Charles Isbell. Solving Stochastic Games. NIPS, 2009. [NIPS2009_1185.pdf] [poster]

Leon Barrett, Jerome Feldman, and Liam Mac Dermed. A (somewhat) new solution to the variable binding problem. Neural Computation, 2008. [2008-nc-binding.pdf]