File name: 1
Download
Do not download with Thunder (Xunlei) or the 360 browser. If Thunder intercepts the link anyway, right-click it and choose "Save As". If a download fails, simply download again; repeated downloads do not cost extra points.
Description -- all content is collected from the Internet; please study and use it at your own discretion.
bp_innerloop.m
Inner loop of the backpropagation learning algorithm.
One hidden layer. Uses tanh as the transfer function.
Uses the following global variables for input and/or output:
Inputs1 - input patterns
Desired - desired output patterns
LearnRate - learning rate parameter
Momentum - momentum parameter
DerivIncr - increment to the derivative of the transfer function (Fahlman's trick; typical value 0.2)
Weights1 - first weight layer (updated by this routine)
Weights2 - second weight layer (updated by this routine)
deltaW1 - initialize to 0 before first call
deltaW2 - initialize to 0 before first call
TSS - total sum-squared error (set by this routine)
Recurrent state
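The packaged routine itself is not reproduced on this page. Purely as a reference, here is a minimal MATLAB/Octave sketch of what a batch inner loop built on the globals listed above could look like. The pattern layout (one pattern per column), the lack of bias terms, and the omission of the Elman recurrent state are assumptions of this sketch, not details confirmed by the package.

% Illustrative sketch only -- not the packaged bp_innerloop.m.
% One batch epoch of backprop for a single-hidden-layer tanh network.
global Inputs1 Desired LearnRate Momentum DerivIncr
global Weights1 Weights2 deltaW1 deltaW2 TSS

Hidden  = tanh(Weights1 * Inputs1);    % forward pass, hidden layer
Outputs = tanh(Weights2 * Hidden);     % forward pass, output layer

Errors = Desired - Outputs;            % output-layer error
TSS    = sum(sum(Errors .^ 2));        % total sum-squared error (set here)

% Backward pass. Fahlman's trick adds DerivIncr to the tanh
% derivative (1 - y^2) so learning does not stall on saturated units.
DeltaOut = Errors .* (1 - Outputs .^ 2 + DerivIncr);
DeltaHid = (Weights2' * DeltaOut) .* (1 - Hidden .^ 2 + DerivIncr);

% Momentum updates; deltaW1/deltaW2 must be zeroed before the first call.
deltaW2 = LearnRate * (DeltaOut * Hidden')  + Momentum * deltaW2;
deltaW1 = LearnRate * (DeltaHid * Inputs1') + Momentum * deltaW1;
Weights2 = Weights2 + deltaW2;
Weights1 = Weights1 + deltaW1;

An Elman variant would additionally feed the previous hidden activations back in alongside the input pattern; the truncated "Recurrent state" entry above presumably documents the global that carries it.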
(Generated automatically by the system; the contents can be inspected before downloading.)
Download file list (entries ending in .m~ appear to be editor backup copies):
elman_bp_innerloop.m
elman_bp_innerloop.m~
elman_test.m
elman_test.m~
elman_train.m
elman_train.m~
forwprop.m
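As a usage illustration only, a driver script along the following lines would prepare the documented globals before invoking the inner loop. Every size and constant here is hypothetical except DerivIncr, whose typical value of 0.2 is quoted in the notes above.

% Hypothetical driver -- not part of the package.
global Inputs1 Desired LearnRate Momentum DerivIncr
global Weights1 Weights2 deltaW1 deltaW2 TSS

nIn = 4; nHid = 8; nOut = 2; nPat = 50;     % hypothetical sizes
Inputs1 = rand(nIn, nPat) * 2 - 1;          % hypothetical training patterns
Desired = rand(nOut, nPat) * 2 - 1;         % hypothetical targets

LearnRate = 0.1;  Momentum = 0.9;           % hypothetical values
DerivIncr = 0.2;                            % typical value per the notes
Weights1 = 0.1 * randn(nHid, nIn);          % small random initial weights
Weights2 = 0.1 * randn(nOut, nHid);
deltaW1 = zeros(size(Weights1));            % zero before first call, as required
deltaW2 = zeros(size(Weights2));

for epoch = 1:1000
    bp_innerloop;                           % run one epoch of the inner loop
    if TSS < 1e-3, break; end               % stop once the error is small
end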