Mark K. Ho
2020 – today

- 2024
- [c29] Ruiqi He, Carlos G. Correa, Tom Griffiths, Mark K. Ho: Structurally Guided Task Decomposition in Spatial Navigation Tasks (Student Abstract). AAAI 2024: 23512-23513
- [c28] Maya Malaviya, Mark K. Ho: Teaching Functions with Gaussian Process Regression. AAAI Spring Symposia 2024: 562-564
- [c27] David Abel, Mark K. Ho, Anna Harutyunyan: Three Dogmas of Reinforcement Learning. RLC 2024: 629-644
- [i24] Ilia Sucholutsky, Katherine M. Collins, Maya Malaviya, Nori Jacoby, Weiyang Liu, Theodore R. Sumers, Michalis Korakakis, Umang Bhatt, Mark K. Ho, Joshua B. Tenenbaum, Bradley C. Love, Zachary A. Pardos, Adrian Weller, Thomas L. Griffiths: Representational Alignment Supports Effective Machine Teaching. CoRR abs/2406.04302 (2024)
- [i23] David Abel, Mark K. Ho, Anna Harutyunyan: Three Dogmas of Reinforcement Learning. CoRR abs/2407.10583 (2024)
- [i22] Katherine M. Collins, Ilia Sucholutsky, Umang Bhatt, Kartik Chandra, Lionel Wong, Mina Lee, Cedegao E. Zhang, Tan Zhi-Xuan, Mark K. Ho, Vikash Mansinghka, Adrian Weller, Joshua B. Tenenbaum, Thomas L. Griffiths: Building Machines that Learn and Think with People. CoRR abs/2408.03943 (2024)
- 2023
- [j2] Carlos G. Correa, Mark K. Ho, Frederick Callaway, Nathaniel D. Daw, Thomas L. Griffiths: Humans decompose tasks by trading off utility and computational cost. PLoS Comput. Biol. 19(6) (2023)
- [c26] Andi Peng, Aviv Netanyahu, Mark K. Ho, Tianmin Shu, Andreea Bobu, Julie Shah, Pulkit Agrawal: Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-Time Policy Adaptation. ICML 2023: 27630-27641
- [i21] Dilip Arumugam, Mark K. Ho, Noah D. Goodman, Benjamin Van Roy: Bayesian Reinforcement Learning with Limited Cognitive Load. CoRR abs/2305.03263 (2023)
- [i20] Andi Peng, Aviv Netanyahu, Mark K. Ho, Tianmin Shu, Andreea Bobu, Julie Shah, Pulkit Agrawal: Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-Time Policy Adaptation. CoRR abs/2307.06333 (2023)
- [i19] Ruiqi He, Carlos G. Correa, Thomas L. Griffiths, Mark K. Ho: Structurally guided task decomposition in spatial navigation tasks. CoRR abs/2310.02221 (2023)
- [i18] Sunayana Rane, Mark K. Ho, Ilia Sucholutsky, Thomas L. Griffiths: Concept Alignment as a Prerequisite for Value Alignment. CoRR abs/2310.20059 (2023)
- [i17] Carlos G. Correa, Sophia Sanborn, Mark K. Ho, Frederick Callaway, Nathaniel D. Daw, Thomas L. Griffiths: Exploring the hierarchical structure of human plans via program generation. CoRR abs/2311.18644 (2023)
- 2022
- [j1] Mark K. Ho, Thomas L. Griffiths: Cognitive Science as a Source of Forward and Inverse Models of Human Decisions for Robotics and Control. Annu. Rev. Control. Robotics Auton. Syst. 5: 33-53 (2022)
- [c25] David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder Singh: On the Expressivity of Markov Reward (Extended Abstract). IJCAI 2022: 5254-5258
- [c24] Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Tom Griffiths, Dylan Hadfield-Menell: How to talk so AI will learn: Instructions, descriptions, and autonomy. NeurIPS 2022
- [i16] Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Thomas L. Griffiths, Dylan Hadfield-Menell: Linguistic communication as (inverse) reward design. CoRR abs/2204.05091 (2022)
- [i15] Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Thomas L. Griffiths, Dylan Hadfield-Menell: How to talk so your robot will learn: Instructions, descriptions, and pragmatics. CoRR abs/2206.07870 (2022)
- [i14] Dilip Arumugam, Mark K. Ho, Noah D. Goodman, Benjamin Van Roy: On Rate-Distortion Theory in Capacity-Limited Cognition & Reinforcement Learning. CoRR abs/2210.16877 (2022)
- [i13] Carlos G. Correa, Mark K. Ho, Frederick Callaway, Nathaniel D. Daw, Thomas L. Griffiths: Humans decompose tasks by trading off utility and computational cost. CoRR abs/2211.03890 (2022)
- 2021
- [c23] Theodore R. Sumers, Mark K. Ho, Robert X. D. Hawkins, Karthik Narasimhan, Thomas L. Griffiths: Learning Rewards From Linguistic Feedback. AAAI 2021: 6002-6010
- [c22] Yun-Shiuan Chuang, Xuezhou Zhang, Yuzhe Ma, Mark K. Ho, Joseph L. Austerweil, Jerry Zhu: Using Machine Teaching to Investigate Human Assumptions when Teaching Reinforcement Learners. CogSci 2021
- [c21] Theodore R. Sumers, Robert D. Hawkins, Mark K. Ho, Tom Griffiths: Extending rational models of communication from beliefs to actions. CogSci 2021
- [c20] Charley M. Wu, Mark K. Ho, Benjamin Kahl, Christina Leuker, Björn Meder, Ralf H. J. M. Kurvers: Specialization and selective social attention establishes the balance between individual and social learning. CogSci 2021
- [c19] David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder Singh: On the Expressivity of Markov Reward. NeurIPS 2021: 7799-7812
- [i12] Mark K. Ho, David Abel, Carlos G. Correa, Michael L. Littman, Jonathan D. Cohen, Thomas L. Griffiths: Control of mental representations in human planning. CoRR abs/2105.06948 (2021)
- [i11] Theodore R. Sumers, Robert X. D. Hawkins, Mark K. Ho, Thomas L. Griffiths: Extending rational models of communication from beliefs to actions. CoRR abs/2105.11950 (2021)
- [i10] Mark K. Ho, Thomas L. Griffiths: Cognitive science as a source of forward and inverse models of human decisions for robotics and control. CoRR abs/2109.00127 (2021)
- [i9] David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder Singh: On the Expressivity of Markov Reward. CoRR abs/2111.00876 (2021)
- 2020
- [c18] Mark K. Ho, David Abel, Jonathan D. Cohen, Michael L. Littman, Thomas L. Griffiths: People Do Not Just Plan, They Plan to Plan. AAAI 2020: 1300-1307
- [c17] Carlos G. Correa, Mark K. Ho, Frederick Callaway, Tom Griffiths: Resource-rational Task Decomposition to Minimize Planning Costs. CogSci 2020
- [c16] Arunima Sarin, Mark K. Ho, Justin Martin, Fiery Cushman: Punishment: Incentive or Communication? CogSci 2020
- [c15] Theodore R. Sumers, Mark K. Ho, Tom Griffiths: Show or Tell? Demonstration is More Robust to Changes in Shared Perception than Explanation. CogSci 2020
- [c14] Charley M. Wu, Natalia Vélez, Mark K. Ho, Robert L. Goldstone: Cognition, Collectives, and Human Culture. CogSci 2020
- [c13] Guan Wang, Carl Trimbach, Jun Ki Lee, Mark K. Ho, Michael L. Littman: Teaching a Robot Tasks of Arbitrary Complexity via Human Feedback. HRI 2020: 649-657
- [i8] Mark K. Ho, David Abel, Jonathan D. Cohen, Michael L. Littman, Thomas L. Griffiths: The Efficiency of Human Cognition Reflects Planned Information Processing. CoRR abs/2002.05769 (2020)
- [i7] Carlos G. Correa, Mark K. Ho, Fred Callaway, Thomas L. Griffiths: Resource-rational Task Decomposition to Minimize Planning Costs. CoRR abs/2007.13862 (2020)
- [i6] Yun-Shiuan Chuang, Xuezhou Zhang, Yuzhe Ma, Mark K. Ho, Joseph L. Austerweil, Xiaojin Zhu: Using Machine Teaching to Investigate Human Assumptions when Teaching Reinforcement Learners. CoRR abs/2009.02476 (2020)
- [i5] Theodore R. Sumers, Mark K. Ho, Robert X. D. Hawkins, Karthik Narasimhan, Thomas L. Griffiths: Learning Rewards from Linguistic Feedback. CoRR abs/2009.14715 (2020)
- [i4] Theodore R. Sumers, Mark K. Ho, Thomas L. Griffiths: Show or Tell? Demonstration is More Robust to Changes in Shared Perception than Explanation. CoRR abs/2012.09035 (2020)
2010 – 2019

- 2019
- [c12] Mark K. Ho, Joanna Korman, Tom Griffiths: The Computational Structure of Unintentional Meaning. CogSci 2019: 1915-1921
- [c11] Carlos G. Correa, Frederick Callaway, Mark K. Ho, Tom Griffiths: Compositional subgoal representations. CogSci 2019: 3255
- [c10] Micah Carroll, Rohin Shah, Mark K. Ho, Tom Griffiths, Sanjit A. Seshia, Pieter Abbeel, Anca D. Dragan: On the Utility of Learning about Humans for Human-AI Coordination. NeurIPS 2019: 5175-5186
- [i3] Mark K. Ho, Joanna Korman, Thomas L. Griffiths: The Computational Structure of Unintentional Meaning. CoRR abs/1906.01983 (2019)
- [i2] Micah Carroll, Rohin Shah, Mark K. Ho, Thomas L. Griffiths, Sanjit A. Seshia, Pieter Abbeel, Anca D. Dragan: On the Utility of Learning about Humans for Human-AI Coordination. CoRR abs/1910.05789 (2019)
- 2018
- [c9] Mark K. Ho, Michael L. Littman, Fiery Cushman, Joseph L. Austerweil: Effectively Learning from Pedagogical Demonstrations. CogSci 2018
- [c8] Marcell Vazquez-Chanlatte, Susmit Jha, Ashish Tiwari, Mark K. Ho, Sanjit A. Seshia: Learning Task Specifications from Demonstrations. NeurIPS 2018: 5372-5382
- 2017
- [c7] Mark K. Ho, Michael L. Littman, Joseph L. Austerweil: Teaching by Intervention: Working Backwards, Undoing Mistakes, or Correcting Mistakes? CogSci 2017
- [c6] James MacGlashan, Mark K. Ho, Robert Tyler Loftin, Bei Peng, Guan Wang, David L. Roberts, Matthew E. Taylor, Michael L. Littman: Interactive Learning from Policy-Dependent Human Feedback. ICML 2017: 2285-2294
- [i1] James MacGlashan, Mark K. Ho, Robert Tyler Loftin, Bei Peng, David L. Roberts, Matthew E. Taylor, Michael L. Littman: Interactive Learning from Policy-Dependent Human Feedback. CoRR abs/1701.06049 (2017)
- 2016
- [c5] Mark K. Ho, James MacGlashan, Amy Greenwald, Michael L. Littman, Elizabeth Hilliard, Carl Trimbach, Stephen Brawner, Josh Tenenbaum, Max Kleiman-Weiner, Joseph L. Austerweil: Feature-based Joint Planning and Norm Learning in Collaborative Games. CogSci 2016
- [c4] Max Kleiman-Weiner, Mark K. Ho, Joseph L. Austerweil, Michael L. Littman, Josh Tenenbaum: Coordinate to cooperate or compete: Abstract goals and joint intentions in social interaction. CogSci 2016
- [c3] Mark K. Ho, Michael L. Littman, James MacGlashan, Fiery Cushman, Joseph L. Austerweil: Showing versus doing: Teaching by demonstration. NIPS 2016: 3027-3035
- 2015
- [c2] Mark K. Ho, Michael L. Littman, Fiery Cushman, Joseph L. Austerweil: Teaching with Rewards and Punishments: Reinforcement or Communication? CogSci 2015
- 2013
- [c1] Mark K. Ho, Fiery Cushman: Working Memory and Abstract Representation in the Context of Culture. CogSci 2013