Model-based estimation of subjective values using choice tasks with probabilistic feedback
http://hdl.handle.net/2237/27094
Name / File | License | Action
---|---|---
Katahira_JMP2017.pdf (file released 2019/08/01, 1.7 MB) | |
Item type | Journal Article
---|---
Date released | 2017-11-14
Title | Model-based estimation of subjective values using choice tasks with probabilistic feedback
Title language | en
Authors | Katahira, Kentaro; Yuki, Shoko; Okanoya, Kazuo
Access rights | open access
Access rights URI | http://purl.org/coar/access_right/c_abf2
Rights (en) | © 2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
Keywords (scheme: Other) | Subjective value; Model-based estimation; Reinforcement learning; Choice behavior; Random feedback
Abstract (en) | Evaluating the subjective value of events is a crucial task in the investigation of how the brain implements the value-based computations by which living systems make decisions. This task is often not straightforward, especially for animal subjects. In the present paper, we propose a novel model-based method for estimating subjective value from choice behavior. The proposed method is based on reinforcement learning (RL) theory. It draws upon the premise that a subject tends to choose the option that leads to an outcome with a high subjective value. The proposed method consists of two components: (1) a novel behavioral task in which the choice outcome is presented randomly within the same valence category and (2) the model parameter fit of RL models to the behavioral data. We investigated the validity and limitations of the proposed method by conducting several computer simulations. We also applied the proposed method to actual behavioral data from two rats that performed two tasks: one manipulating the reward amount and another manipulating the delay of reward signals. These results demonstrate that reasonable estimates can be obtained using the proposed method.
Publisher | Elsevier
Language | eng
Resource type identifier | http://purl.org/coar/resource_type/c_6501
Resource type | journal article
Publication type | AM (accepted manuscript)
Publication type URI | http://purl.org/coar/version/c_ab4af688f83e57aa
Relation type | isVersionOf
Related identifier (DOI) | https://doi.org/10.1016/j.jmp.2017.05.005
Source identifier (PISSN) | 0022-2496
Bibliographic information | Journal of Mathematical Psychology, vol. 79, p. 29-43, issued 2017-08
Author version flag | author
Identifier (DOI) | http://doi.org/10.1016/j.jmp.2017.05.005
Identifier (HDL) | http://hdl.handle.net/2237/27094
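The abstract describes fitting RL models to choice data so that subjective values can be read off from the fitted parameters. As a rough illustration of that general approach (not the paper's exact task or model; the agent, reward values, and parameter choices below are invented for the sketch), here is a minimal maximum-likelihood fit of a two-parameter Q-learning model with a softmax choice rule to simulated two-option choice data. In the paper's setting, the subjective values of the outcomes themselves would be additional free parameters rather than fixed numbers.

```python
# Sketch: simulate a Q-learning agent on a two-option task, then recover
# its learning rate (alpha) and inverse temperature (beta) by maximum
# likelihood. Illustrative only; not the authors' implementation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate(alpha, beta, values=(1.0, 0.2), n_trials=500):
    """Generate choices/outcomes from a softmax Q-learning agent."""
    q = np.zeros(2)
    choices, outcomes = [], []
    for _ in range(n_trials):
        # Softmax over two options reduces to a logistic of the Q difference.
        p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))
        c = int(rng.random() < p1)
        r = values[c]
        q[c] += alpha * (r - q[c])  # delta-rule value update
        choices.append(c)
        outcomes.append(r)
    return np.array(choices), np.array(outcomes)

def neg_log_lik(params, choices, outcomes):
    """Negative log-likelihood of the choice sequence under the model."""
    alpha, beta = params
    q = np.zeros(2)
    nll = 0.0
    for c, r in zip(choices, outcomes):
        p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))
        p_c = p1 if c == 1 else 1.0 - p1
        nll -= np.log(max(p_c, 1e-12))  # guard against log(0)
        q[c] += alpha * (r - q[c])
    return nll

choices, outcomes = simulate(alpha=0.3, beta=3.0)
fit = minimize(neg_log_lik, x0=[0.5, 1.0], args=(choices, outcomes),
               bounds=[(1e-3, 1.0), (1e-3, 20.0)])
alpha_hat, beta_hat = fit.x
```

The fitted parameters are those that make the observed choice sequence most probable; extending the same likelihood with free outcome-value parameters is what turns this into an estimator of subjective value.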