Sensitivity Study of Feedforward Artificial Neural Networks (slide deck, 37 slides)


I. Introduction

1. Feedforward neural networks (FNNs)
- Neuron — discrete type: adaptive linear element (Adaline); continuous type: perceptron
- Network — discrete type: multilayer adaptive linear network (Madaline); continuous type: multilayer perceptron (BP network, or MLP)

2. Problem statement
- Problems: finite hardware precision perturbs the weights; environmental noise perturbs the inputs
- Motivation: what effect do parameter perturbations have on the network, and how can the resulting deviation of the network output be measured?

3. Research content
- Establish the relation between the network output and perturbations of the network parameters
- Analyze this relation to reveal the network's behavioral regularities
- Quantify the deviation of the network output

4. Significance
- Guides network design and strengthens the network's resistance to interference
- Measures network performance, e.g. fault tolerance and generalization ability
- Serves as a foundation for other topics, e.g. pruning of the network structure and selection of parameters

II. Survey of Prior Work (typical approaches and literature)

1. Sensitivity of Madaline
- n-dimensional geometric model (hypersphere): M. Stevenson, R. Winter, and B. Widrow, "Sensitivity of Feedforward Neural Networks to Weight Errors," IEEE Trans. on Neural Networks, vol. 1, no. 1, 1990.
- Statistical model (variance): S. W. Piché, "The Selection of Weight Accuracies for Madalines," IEEE Trans. on Neural Networks, vol. 6, no. 2, 1995.

2. Sensitivity of MLPs
- Analytical approach (partial derivatives): S. Hashem, "Sensitivity Analysis for Feed-Forward Artificial Neural Networks with Differentiable Activation Functions," Proc. of IJCNN, vol. 1, 1992.
- Statistical approach (standard deviation): J. Y. Choi and C. H. Choi, "Sensitivity Analysis of Multilayer Perceptron with Differentiable Activation Functions," IEEE Trans. on Neural Networks, vol. 3, no. 1, 1992.

3. Applications of sensitivity
- Input feature selection: J. M. Zurada, A. Malinowski, and S. Usui, "Perturbation Method for Deleting Redundant Inputs of Perceptron Networks," Neurocomputing, vol. 14, 1997.
- Network structure pruning: A. P. Engelbrecht, "A New Pruning Heuristic Based on Variance Analysis of Sensitivity Information," IEEE Trans. on Neural Networks, vol. 12, no. 6, 2001.
- Fault tolerance and generalization: J. L. Bernier et al., "A Quantitative Study of Fault Tolerance, Noise Immunity and Generalization Ability of MLPs," Neural Computation, vol. 12, 2000.

III. Research Methods
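The sensitivity studied throughout this deck is the expected deviation of a unit's output under random perturbation of its inputs and weights. A small Monte Carlo sketch for a single sigmoid neuron illustrates the statistical approach (hypothetical code; the weight vector and the uniform perturbation distributions are illustrative assumptions, not taken from the cited papers):

```python
import math
import random

def sigmoid_neuron(x, w):
    # y = f(w . x) with a logistic activation.
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-s))

def mc_sensitivity(w, dev, trials=20000, seed=0):
    """Monte Carlo estimate of the neuron's sensitivity:
    E|y(x + dx, w + dw) - y(x, w)|, with inputs x drawn uniformly
    from [-1, 1] and perturbations dx, dw uniform on [-dev, dev]."""
    rng = random.Random(seed)
    n = len(w)
    total = 0.0
    for _ in range(trials):
        x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        xp = [xi + rng.uniform(-dev, dev) for xi in x]
        wp = [wi + rng.uniform(-dev, dev) for wi in w]
        total += abs(sigmoid_neuron(xp, wp) - sigmoid_neuron(x, w))
    return total / trials

w = [0.5, -0.3, 0.8]
s_small, s_large = mc_sensitivity(w, 0.1), mc_sensitivity(w, 0.5)
# Consistent with the experimental findings later in the deck: sensitivity
# grows with the deviation magnitude but stays bounded, since the logistic
# output itself lies in (0, 1).
print(0.0 < s_small < s_large < 1.0)  # → True
```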

1. Bottom-up approach
- From a single neuron to the whole network

2. Probabilistic and statistical approach
- Probability (discrete type); expectation (continuous type)

3. n-dimensional geometric model
- Vertices of a hyperrectangle (discrete type); the hyperrectangle body (continuous type)

IV. Results Obtained (representative papers)

Sensitivity analysis:
"Sensitivity Analysis of Multilayer Perceptron to Input and Weight Perturbations," IEEE Trans. on Neural Networks, vol. 12, no. 6, pp. 1358-1366, Nov. 2001.

Sensitivity quantification:
"A Quantified Sensitivity Measure for Multilayer Perceptron to Input Perturbation," Neural Computation, vol. 15, no. 1, pp. 183-212, Jan. 2003.

Pruning of hidden neurons (an application of sensitivity):
"Hidden Neuron Pruning for Multilayer Perceptrons Using Sensitivity Measure," Proc. of IEEE ICMLC 2002, pp. 1751-1757, Nov. 2002.

Determining the relevance of input features (an application of sensitivity):
"Determining the Relevance of Input Features for Multilayer Perceptrons," Proc. of IEEE SMC 2003, Oct. 2003.

V. Future Work

- Refine the existing results to make them more practical: relax the restrictive assumptions, widen the scope of the analysis, and make the quantified computation more precise
- Apply the obtained results further, to solve practical problems
- Explore new methods and study new types of networks

The End. Thank you!

[Slides 15-19: figures with no recoverable text.]

Experimental findings (from the figure slides):

- Effects of input and weight deviations on a neuron's sensitivity: sensitivity increases with the input and weight deviations, but the increase has an upper bound.
- Effects of the input dimension on a neuron's sensitivity: there exists an optimal value of the input dimension that yields the highest sensitivity.
- Effects of input and weight deviations on an MLP's sensitivity: the sensitivity of an MLP increases with the input and weight deviations.
- Effects of the number of neurons in a layer, shown by three families of curves: sensitivity of MLPs n-2-2-1 (1 ≤ n ≤ 10) to the dimension of the input; sensitivity of MLPs 2-n-2-1 (1 ≤ n ≤ 10) to the number of neurons in the 1st layer; sensitivity of MLPs 2-2-n-1 (1 ≤ n ≤ 10) to the number of neurons in the 2nd layer. There exists an optimal value of the number of neurons in a layer that yields the highest sensitivity, and the nearer a layer is to the output layer, the greater the effect the number of neurons in that layer has.
- Effects of the number of layers: sensitivity of MLPs 2-1, 2-2-1, …, 2-2-2-2-2-2-2-2-2-2-1 to the number of layers. Sensitivity decreases as the number of layers increases, and the decrease almost levels off when the number becomes large.
- Sensitivity of neurons with 2-, 3-, 4-, and 5-dimensional inputs (figures).
- Sensitivity of the MLPs 2-2-1, 2-3-1, and 2-2-2-1 (figure).

Simulation 1 (Function Approximation)

Implement an MLP to approximate a target function. [The function's definition was an embedded formula and is not recoverable from this export.]

Implementation considerations:
- The MLP architecture is restricted to 2-n-1.
- The convergence condition is MSE goal = 0.01 and epoch ≤ 10^5.
- The lowest trainable number of hidden neurons is n = 5.
- The pruning processes start with MLPs of 2-5-1 and stop at an architecture of 2-4-1.
- The relevant data used by and resulting from the pruning process are listed in Table 1 and Table 2.

TABLE 1. Data for 3 MLPs with 5 hidden neurons to realize the function (MSE goal = 0.01, epoch ≤ 100000). Weights are listed in the order given in the source; sensitivity and relevance are per hidden neuron.

MLP 2-5-1 #1
- Epoch: 30586; MSE (training): 0.000999816; MSE (testing): 0.0117005
- Trained weights and bias: -12.9212 -0.2999 33.7943 -34.6057 31.4768 -31.0169 -0.5607 -0.8140 1.1737 -1.1026 -5.4507 12.7341 -13.0816 -12.0171 8.7152; bias = 0
- Sensitivity: 0.031794 0.002272 0.001406 0.027066 0.001815
- Relevance: 0.1733 0.0289 0.0184 0.3253 0.0158

MLP 2-5-1 #2
- Epoch: 65209; MSE (training): 0.000999959; MSE (testing): 0.0124573
- Trained weights and bias: 32.6223 -33.3731 -0.7361 0.7202 -31.8412 31.2399 -15.1872 -0.0937 -0.3989 -1.0028 11.9959 -15.4905 12.2103 -6.0877 -12.5057; bias = 0
- Sensitivity: 0.002176 0.000463 0.001821 0.031017 0.027068
- Relevance: 0.0261 0.0072 0.0222 0.1888 0.3385

MLP 2-5-1 #3
- Epoch: 26094; MSE (training): 0.000999944; MSE (testing): 0.0120354
- Trained weights and bias: -15.0940 17.6184 -19.9163 21.4109 -14.0535 -0.8460 1.0263 -0.1258 26.7757 -26.1259 8.8172 -18.6532 -6.8307 16.8506 -10.4671; bias = 0
- Sensitivity: 0.013547 0.006661 0.026220 0.028352 0.002324
- Relevance: 0.1194 0.1242 0.1791 0.4777 0.0243
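The pruning step that turns the Table 1 networks into the Table 2 networks removes the hidden neuron judged least relevant. The tabulated numbers are consistent with relevance_i = sensitivity_i × |output-layer weight_i| — an inference from the tables, not a formula quoted on the slides. A sketch of the selection rule, using the Table 1, MLP #1 row:

```python
# Per-hidden-neuron sensitivity and output-layer weights of MLP #1, Table 1.
sensitivity = [0.031794, 0.002272, 0.001406, 0.027066, 0.001815]
out_weight = [-5.4507, 12.7341, -13.0816, -12.0171, 8.7152]

# Relevance of each hidden neuron; the tabulated values agree with
# sensitivity * |outgoing weight| (inferred relation, see lead-in).
relevance = [s * abs(w) for s, w in zip(sensitivity, out_weight)]

# Prune the hidden neuron with the smallest relevance (1-based index).
prune = min(range(len(relevance)), key=lambda i: relevance[i]) + 1
print(round(relevance[0], 4))  # → 0.1733, as listed in Table 1
print(prune)                   # → 5, matching Table 2 (MLP #1 lost its 5th neuron)
```

The same rule accounts for every pruned network reported below: in Tables 2 and 4, the neuron removed is in each case the one with the smallest relevance in the corresponding unpruned network of Table 1 or Table 3.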

TABLE 2. Data for the 3 pruned MLPs with 4 hidden neurons to realize the function (MSE goal = 0.01, epoch ≤ 100000).

MLP 2-4-1 #1 (obtained by removing the 5th neuron from the MLP of 2-5-1)
- Epoch: 2251; MSE (training): 0.000999998; MSE (testing): 0.0114834
- Retrained weights and bias: -14.4387 -0.7003 34.8366 -35.6080 33.1285 -32.6271 -1.5065 0.0184 -5.7036 13.0579 -13.2457 -12.1803; bias = 4.2349
- Sensitivity: 0.027014 0.002100 0.001460 0.031343
- Relevance: 0.1541 0.0274 0.0193 0.3818

MLP 2-4-1 #2 (obtained by removing the 2nd neuron from the MLP of 2-5-1)
- Epoch: 1945; MSE (training): 0.000999921; MSE (testing): 0.0119645
- Retrained weights and bias: 33.5805 -34.2727 -32.9313 32.3172 -15.8016 -0.5610 -1.3318 0.0103 12.6267 12.7961 -6.1782 -13.3652; bias = -7.9468
- Sensitivity: 0.001954 0.001800 0.026902 0.029283
- Relevance: 0.0247 0.0230 0.1662 0.3914

MLP 2-4-1 #3 (obtained by removing the 5th neuron from the MLP of 2-5-1)
- Epoch: 13253; MSE (training): 0.000999971; MSE (testing): 0.011926
- Retrained weights and bias: -34.3974 33.8148 -34.3250 34.7990 -1.2909 0.0198 11.8097 0.8879 15.7984 -15.6503 -12.9606 6.0722; bias = -1.4194
- Sensitivity: 0.001637 0.001316 0.028834 0.028122
- Relevance: 0.0259 0.0206 0.3737 0.1708

Simulation 2 (Classification)

Implement an MLP to solve the XOR problem, with targets 0 and 1. [The truth table was an embedded figure and is not reproduced here.]

Implementation considerations:
- The MLP architecture is restricted to 2-n-1.
- The convergence condition is MSE goal = 0.1 and epoch ≤ 10^5.
- The pruning processes start with MLPs of 2-5-1 and stop at an architecture of 2-4-1.
- The relevant data used by and resulting from the pruning process are listed in Table 3 and Table 4.

TABLE 3. Data for 3 MLPs with 5 hidden neurons to realize the XOR function (MSE goal = 0.1, epoch ≤ 100000).

MLP 2-5-1 #1
- Epoch: 44518; MSE (training): 0.0999997; MSE (testing): 0.109217
- Trained weights and bias: 2.8188 -8.1143 2.4420 -0.5450 2.5766 3.7037 1.4955 -2.9245 -2.5714 -3.7124 14.0153 -43.9907 28.0636 19.5486 -68.6432; bias = 0
- Sensitivity: 0.047599 0.035747 0.031518 0.027355 0.031513
- Relevance: 0.6671 1.5725 0.8845 0.5348 2.1632

MLP 2-5-1 #2
- Epoch: 51098; MSE (training): 0.0999998; MSE (testing): 0.113006
- Trained weights and bias: 1.4852 -3.8902 1.0692 0.1466 -1.0723 -0.1455 -7.0301 2.5695 -3.1382 -2.8094 23.9314 -19.1824 27.1565 14.9694 -91.6363; bias = 0
- Sensitivity: 0.037593 0.020170 0.020178 0.045504 0.032550
- Relevance: 0.8997 0.3869 0.5480 0.6812 2.9828

MLP 2-5-1 #3
- Epoch: 33631; MSE (training): 0.0999994; MSE (testing): 0.11369
- Trained weights and bias: 3.2920 2.9094 -1.0067 3.4724 -7.0578 2.4377 -3.2921 -2.9096 1.5303 -0.0606 45.7579 -30.0598 16.5386 -52.2874 -29.7040; bias = 0
- Sensitivity: 0.031498 0.039166 0.046210 0.031497 0.031715
- Relevance: 1.4413 1.1773 0.7642 1.6469 0.9421

TABLE 4. Data for the 3 pruned MLPs with 4 hidden neurons to realize the XOR function (MSE goal = 0.1, epoch ≤ 100000).

MLP 2-4-1 #1 (obtained by removing the 4th neuron from the MLP of 2-5-1)
- Epoch: 22611; MSE (training): 0.0999999; MSE (testing): 0.109085
- Retrained weights and bias: 2.8745 -6.8849 1.9844 0.0405 2.6295 3.8648 -2.6270 -3.8656 22.5649 -51.3458 33.0982 -74.4371; bias = 5.5570
- Sensitivity: 0.043173 0.028627 0.030708 0.030717
- Relevance: 0.9742 1.4699 1.0164 2.2865

MLP 2-4-1 #2 (obtained by removing the 2nd neuron from the MLP of 2-5-1)
- Epoch: 14457; MSE (training): 0.0999998; MSE (testing): 0.112792
- Retrained weights and bias: 1.1511 -3.9352 -1.4080 -0.2348 -6.8277 2.3307 -3.2002 -2.9670 26.3668 31.5437 16.3482 -98.8089; bias = -12.4656
- Sensitivity: 0.040841 0.029591 0.045979 0.031612
- Relevance: 1.0768 0.9334 0.7517 3.1235

MLP 2-4-1 #3 (obtained by removing the 3rd neuron from the MLP of 2-5-1)
- Epoch: 17501; MSE (training): 0.0999997; MSE (testing): 0.111499
- Retrained weights and bias: 3.0386 3.7789 -1.3471 4.6670 -3.0386 -3.7789 3.5143 -0.7579 59.1526 -34.0215 -58.5949 -36.1761; bias = 1.7474
- Sensitivity: 0.029114 0.043097 0.029114 0.041372
- Relevance: 1.7222 1.4662 1.7059 1.4967
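Simulation 2's setup — a 2-n-1 MLP of logistic units accepted once the XOR training MSE falls below 0.1 — can be checked with a simple forward pass. The weights below are a standard hand-set XOR solution used purely for illustration; they are not the trained weights of Table 3:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def mlp_2_2_1(x1, x2):
    # 2-2-1 MLP with hand-set weights that realize XOR
    # (illustrative values, not the trained weights in Table 3).
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # acts like OR
    h2 = sigmoid(-20 * x1 - 20 * x2 + 30)   # acts like NAND
    return sigmoid(20 * h1 + 20 * h2 - 30)  # OR AND NAND = XOR

# XOR truth table and the MSE convergence check used in Simulation 2.
patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
mse = sum((mlp_2_2_1(*x) - t) ** 2 for x, t in patterns) / len(patterns)
print(mse < 0.1)  # → True: this network meets the MSE goal of Simulation 2
```

The same evaluation loop, wrapped around a backpropagation trainer and repeated for n = 5 hidden neurons, would reproduce the experimental protocol behind Tables 3 and 4.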
