Computational modeling of behavioral tasks: An illustration on a classic reinforcement learning paradigm

DOI: 10.20982/tqmp.17.2.p105

Suthaharan, Praveen; Corlett, Philip R.; Ang, Yuen-Siang
Pages: 105-140
Keywords: computational modeling, reinforcement learning, two-armed bandit, parameter estimation, maximum likelihood estimation, maximum a posteriori, expectation-maximization
Tools: R
Supplementary material: Appendix (no sample data)

There has been a growing interest among psychologists, psychiatrists and neuroscientists in applying computational modeling to behavioral data to understand animal and human behavior. Such approaches can be daunting for those without experience. This paper presents a step-by-step tutorial to conduct parameter estimation in R via three techniques: Maximum Likelihood Estimation (MLE), Maximum A Posteriori (MAP) and Expectation-Maximization with Laplace approximation (EML). We first demonstrate how to simulate a classic reinforcement learning paradigm -- the two-armed bandit task -- for N = 100 subjects; and then explain how to develop the computational model and implement the MLE, MAP and EML methods to recover the parameters. By presenting a sufficiently detailed walkthrough on a familiar behavioral task, we hope this tutorial will benefit readers interested in applying parameter estimation methods in their own research.
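To give a flavor of the workflow the abstract describes, the R sketch below simulates a single subject on a two-armed bandit with a Rescorla-Wagner learning rule and a softmax choice rule, then recovers the learning rate (alpha) and inverse temperature (beta) by maximum likelihood with optim(). This is a minimal illustration, not the authors' code: the reward probabilities, parameter values, and bounds are illustrative assumptions, and the full tutorial additionally covers MAP and EML estimation across N = 100 subjects.

# Minimal sketch (illustrative assumptions; not the paper's exact implementation)
set.seed(1)

# Simulate one subject: Rescorla-Wagner update + softmax choice
simulate_bandit <- function(n_trials = 200, alpha = 0.3, beta = 5,
                            p_reward = c(0.7, 0.3)) {
  Q <- c(0, 0)                                   # initial action values
  choice <- reward <- integer(n_trials)
  for (t in seq_len(n_trials)) {
    p <- exp(beta * Q) / sum(exp(beta * Q))      # softmax choice probabilities
    choice[t] <- sample(1:2, 1, prob = p)
    reward[t] <- rbinom(1, 1, p_reward[choice[t]])
    Q[choice[t]] <- Q[choice[t]] + alpha * (reward[t] - Q[choice[t]])  # RW update
  }
  list(choice = choice, reward = reward)
}

# Negative log-likelihood of the choices under the same model
neg_log_lik <- function(par, choice, reward) {
  alpha <- par[1]; beta <- par[2]
  Q <- c(0, 0); nll <- 0
  for (t in seq_along(choice)) {
    p <- exp(beta * Q) / sum(exp(beta * Q))
    nll <- nll - log(p[choice[t]])
    Q[choice[t]] <- Q[choice[t]] + alpha * (reward[t] - Q[choice[t]])
  }
  nll
}

# Recover parameters by maximum likelihood (bounds are illustrative)
dat <- simulate_bandit()
fit <- optim(par = c(0.5, 1), fn = neg_log_lik,
             choice = dat$choice, reward = dat$reward,
             method = "L-BFGS-B", lower = c(0.001, 0.001), upper = c(1, 20))
fit$par   # estimated (alpha, beta), to be compared with the simulated values

In practice one would repeat the fit from several starting values and, as in the tutorial, extend this single-subject MLE to MAP and EML estimation to pool information across subjects.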

