Description:
The victory of the AI agent AlphaGo over human Go experts reshaped the public's understanding of artificial intelligence and brought its core technology, reinforcement learning, broad attention from the academic community. Against this background, this book presents the author's years of research on reinforcement learning theory and applications, together with the latest developments in the field at home and abroad; it is one of the few specialized monographs on reinforcement learning.

The book focuses on reinforcement learning methods based on direct policy search, drawing on many techniques from statistical learning to analyze, improve, and apply the relevant methods. It describes policy search reinforcement learning algorithms from a fresh, modern perspective. Starting from different reinforcement learning scenarios, it discusses the many difficulties reinforcement learning faces in practical applications. For each scenario, it presents a concrete policy search algorithm, analyzes the statistical properties of the algorithm's estimators and learned parameters, and demonstrates and quantitatively compares the algorithms on application examples. In particular, combining cutting-edge reinforcement learning techniques, the book applies policy search algorithms to robot control and digital art rendering, which gives the work a refreshingly novel flavor. Finally, drawing on the author's long-term research experience, it briefly reviews and summarizes trends in the development of reinforcement learning. The material is classic and comprehensive, the concepts are clear, and the derivations are rigorous, with the aim of forming a complete body of knowledge integrating fundamental theory, algorithms, and applications.
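To give a flavor of the baseline-based variance reduction for policy gradient estimators that the book analyzes, here is a minimal REINFORCE-style sketch on a one-dimensional toy problem. The Gaussian policy, the quadratic reward, and all parameter values are my own illustrative assumptions, not code or settings from the book.

```python
import random


def reinforce_baseline(mu=0.0, sigma=1.0, target=3.0,
                       iters=200, batch=64, lr=0.05, seed=0):
    """Toy REINFORCE with a mean-reward baseline (hypothetical example).

    Policy: sample action a ~ N(mu, sigma^2) with fixed sigma.
    Reward: r(a) = -(a - target)^2.
    The gradient estimate (1/N) * sum_i (r_i - b) * grad log pi(a_i)
    is unbiased for any constant b; subtracting the batch-mean reward
    as baseline b reduces its variance.
    """
    rng = random.Random(seed)
    for _ in range(iters):
        actions = [rng.gauss(mu, sigma) for _ in range(batch)]
        rewards = [-(a - target) ** 2 for a in actions]
        b = sum(rewards) / batch  # baseline: mean reward of the batch
        # grad of log N(a; mu, sigma^2) with respect to mu is (a - mu) / sigma^2
        grad = sum((r - b) * (a - mu) / sigma ** 2
                   for a, r in zip(actions, rewards)) / batch
        mu += lr * grad  # gradient ascent on expected reward
    return mu


print(f"learned mu = {reinforce_baseline():.2f}")  # approaches target = 3.0
```

The same baseline idea carries over to parameter-based exploration methods such as PGPE, where the noise is injected in parameter space rather than action space; the book's Chapters 3 and 4 develop the optimal (rather than mean-reward) baseline and its off-policy, importance-weighted variant.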
About the Author:

Zhao Tingting is an associate professor at the College of Artificial Intelligence, Tianjin University of Science and Technology; her main research interests are artificial intelligence and machine learning. She is a member of the China Computer Federation (CCF), a member of YOCSEF, a member of the Chinese Association for Artificial Intelligence, and a member of its Pattern Recognition Technical Committee. In 2017 she was selected as a second-tier candidate of Tianjin's "131" Innovative Talent Training Project.
Table of Contents:
Chapter 1 Overview of Reinforcement Learning
1.1 Reinforcement Learning in Machine Learning
1.2 Reinforcement Learning in Intelligent Control
1.3 Branches of Reinforcement Learning
1.4 Contributions of This Book
1.5 Structure of This Book
References
Chapter 2 Related Work and Background
2.1 Markov Decision Processes
2.2 Value-Function-Based Policy Learning Algorithms
2.2.1 Value Functions
2.2.2 Policy Iteration and Value Iteration
2.2.3 Q-learning
2.2.4 Least-Squares Policy Iteration
2.2.5 Value-Based Deep Reinforcement Learning Methods
2.3 Policy Search Algorithms
2.3.1 Modeling Policy Search
2.3.2 The Classic Policy Gradient Algorithm (REINFORCE)
2.3.3 Natural Policy Gradient Methods
2.3.4 Expectation-Maximization-Based Policy Search Methods
2.3.5 Policy-Based Deep Reinforcement Learning Methods
2.4 Chapter Summary
References
Chapter 3 Analysis and Improvement of Policy Gradient Estimation
3.1 Background
3.2 Policy Gradients with Parameter-Based Exploration (PGPE)
3.3 Variance Analysis of Gradient Estimates
3.4 Optimal-Baseline-Based Improvement and Analysis of the Algorithm
3.4.1 The Basic Idea of the Optimal Baseline
3.4.2 The Optimal Baseline for PGPE
3.5 Experiments
3.5.1 Illustrative Example
3.5.2 Inverted Pendulum Balancing
3.6 Summary and Discussion
References
Chapter 4 Parameter-Based Exploration Policy Gradient with Importance Sampling
4.1 Background
4.2 PGPE in the Off-Policy Setting
4.2.1 Importance-Weighted PGPE (IW-PGPE)
4.2.2 Variance Reduction for IW-PGPE via Baseline Subtraction
4.3 Experimental Results
4.3.1 Illustrative Example
4.3.2 Mountain Car Task
4.3.3 Simulated Robot Control Task
4.4 Summary and Discussion
References