File name: MDP_policy
Download
Do not download with the Thunder (迅雷) or 360 browser download managers.
If Thunder pops up anyway, right-click the link and choose "Save As".
If the download fails, simply retry; repeated downloads are not charged again.
Description -- All downloadable content is collected from the Internet; please evaluate it yourself before use.
A Markov decision process (MDP) toolbox, suitable for optimal policy selection; it can be called as a toolkit.
(Auto-generated by the system; you can review the file list below before downloading.)
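To illustrate the kind of computation this toolbox performs (its `mdp_policy_iteration.m` routine in particular), here is a minimal, self-contained sketch of policy iteration in Python. This is not the toolbox's own code; the 2-state, 2-action transition and reward numbers in the usage example are made up for demonstration.

```python
import numpy as np

def policy_iteration(P, R, discount=0.95, max_iter=1000):
    """Find an optimal policy for a finite MDP by policy iteration.

    P: (A, S, S) array, P[a, s, s'] = transition probability.
    R: (S, A) array,    R[s, a]     = expected immediate reward.
    Returns the value function V (S,) and a greedy policy (S,).
    """
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)        # start from an arbitrary policy
    for _ in range(max_iter):
        # Policy evaluation: solve (I - discount * P_pi) V = R_pi exactly.
        P_pi = P[policy, np.arange(S), :]  # (S, S) transitions under policy
        R_pi = R[np.arange(S), policy]     # (S,)  rewards under policy
        V = np.linalg.solve(np.eye(S) - discount * P_pi, R_pi)
        # Policy improvement: act greedily with respect to V.
        Q = R.T + discount * P @ V         # (A, S) action values
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            break                          # policy is stable, hence optimal
        policy = new_policy
    return V, policy

# Hypothetical toy MDP, for demonstration only.
P = np.array([[[0.5, 0.5], [0.8, 0.2]],
              [[0.0, 1.0], [0.1, 0.9]]])  # P[a, s, s']
R = np.array([[ 5, 10],
              [-1,  2]])                  # R[s, a]
V, policy = policy_iteration(P, R, discount=0.9)
```

The exact policy-evaluation solve keeps the sketch short; the toolbox also offers iterative evaluation (`mdp_eval_policy_iterative.m`) for larger state spaces.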
Download file list

File name | Size (bytes) | Updated |
---|---|---|
MDPtoolbox | 0 | 2018-03-15 |
MDPtoolbox\AUTHORS | 63 | 2009-11-10 |
MDPtoolbox\COPYING | 1563 | 2009-11-10 |
MDPtoolbox\documentation | 0 | 2018-03-09 |
MDPtoolbox\documentation\arrow.gif | 231 | 2009-11-10 |
MDPtoolbox\documentation\BIA.png | 6876 | 2009-11-10 |
MDPtoolbox\documentation\DOCUMENTATION.html | 3110 | 2009-11-10 |
MDPtoolbox\documentation\index_alphabetic.html | 6469 | 2009-11-10 |
MDPtoolbox\documentation\index_category.html | 7022 | 2009-11-10 |
MDPtoolbox\documentation\INRA.png | 134450 | 2009-11-10 |
MDPtoolbox\documentation\mdp_bellman_operator.html | 3276 | 2009-11-10 |
MDPtoolbox\documentation\mdp_check.html | 2959 | 2009-11-10 |
MDPtoolbox\documentation\mdp_check_square_stochastic.html | 2465 | 2009-11-10 |
MDPtoolbox\documentation\mdp_computePpolicyPRpolicy.html | 3357 | 2009-11-10 |
MDPtoolbox\documentation\mdp_computePR.html | 2885 | 2009-11-10 |
MDPtoolbox\documentation\mdp_eval_policy_iterative.html | 7506 | 2009-11-10 |
MDPtoolbox\documentation\mdp_eval_policy_matrix.html | 2984 | 2009-11-10 |
MDPtoolbox\documentation\mdp_eval_policy_optimality.html | 3691 | 2009-11-10 |
MDPtoolbox\documentation\mdp_eval_policy_TD_0.html | 3363 | 2009-11-10 |
MDPtoolbox\documentation\mdp_example_forest.html | 6834 | 2009-11-10 |
MDPtoolbox\documentation\mdp_example_rand.html | 3833 | 2009-11-10 |
MDPtoolbox\documentation\mdp_finite_horizon.html | 4160 | 2009-11-10 |
MDPtoolbox\documentation\mdp_LP.html | 3245 | 2009-11-10 |
MDPtoolbox\documentation\mdp_policy_iteration.html | 4940 | 2009-11-10 |
MDPtoolbox\documentation\mdp_policy_iteration_modified.html | 4966 | 2009-11-10 |
MDPtoolbox\documentation\mdp_Q_learning.html | 4185 | 2009-11-10 |
MDPtoolbox\documentation\mdp_relative_value_iteration.html | 7733 | 2009-11-10 |
MDPtoolbox\documentation\mdp_span.html | 2082 | 2009-11-10 |
MDPtoolbox\documentation\mdp_value_iteration.html | 6731 | 2009-11-10 |
MDPtoolbox\documentation\mdp_value_iterationGS.html | 8689 | 2009-11-10 |
MDPtoolbox\documentation\mdp_value_iteration_bound_iter.html | 3580 | 2009-11-10 |
MDPtoolbox\documentation\mdp_verbose_silent.html | 2446 | 2009-11-10 |
MDPtoolbox\documentation\meandiscrepancy.jpg | 16285 | 2009-11-10 |
MDPtoolbox\mdp_bellman_operator.m | 3523 | 2009-11-10 |
MDPtoolbox\mdp_check.m | 4038 | 2009-11-10 |
MDPtoolbox\mdp_check_square_stochastic.m | 2266 | 2009-11-10 |
MDPtoolbox\mdp_computePpolicyPRpolicy.m | 2932 | 2009-11-10 |
MDPtoolbox\mdp_computePR.m | 2886 | 2009-11-10 |
MDPtoolbox\mdp_eval_policy_iterative.m | 5948 | 2009-11-10 |
MDPtoolbox\mdp_eval_policy_matrix.m | 3549 | 2009-11-10 |
MDPtoolbox\mdp_eval_policy_optimality.m | 4203 | 2009-11-10 |
MDPtoolbox\mdp_eval_policy_TD_0.m | 5080 | 2009-11-10 |
MDPtoolbox\mdp_example_forest.m | 4729 | 2009-11-10 |
MDPtoolbox\mdp_example_rand.m | 3837 | 2009-11-10 |
MDPtoolbox\mdp_finite_horizon.m | 4169 | 2009-11-10 |
MDPtoolbox\mdp_LP.m | 3843 | 2009-11-10 |
MDPtoolbox\mdp_policy_iteration.m | 5544 | 2009-11-10 |
MDPtoolbox\mdp_policy_iteration_modified.m | 5669 | 2009-11-10 |
MDPtoolbox\mdp_Q_learning.m | 5407 | 2009-11-10 |
MDPtoolbox\mdp_relative_value_iteration.m | 5199 | 2009-11-10 |
MDPtoolbox\mdp_silent.m | 1736 | 2009-11-10 |
MDPtoolbox\mdp_span.m | 1707 | 2009-11-10 |
MDPtoolbox\mdp_value_iteration.m | 6794 | 2009-11-10 |
MDPtoolbox\mdp_value_iterationGS.m | 7435 | 2009-11-10 |
MDPtoolbox\mdp_value_iteration_bound_iter.m | 5074 | 2009-11-10 |
MDPtoolbox\mdp_verbose.m | 1755 | 2009-11-10 |
MDPtoolbox\README | 2437 | 2009-11-10 |