统计策略搜索强化学习方法及应用 (Statistical Policy Search Reinforcement Learning Methods and Applications), Simplified Chinese edition

Store catalogue no.: 3675062
Category: Simplified Chinese books → Mainland China books → Computers/Networks → Artificial Intelligence
Author: 赵婷婷
ISBN: 9787121419591
Publisher: 电子工业出版社
Publication date: 2021-09-01
Pages/word count: /
Trim size: 16开    Binding: paperback
Price: NT$ 435


Suggested to buy together:
《阿里云天池大赛赛题解析——机器学习篇》  NT$ 654
《深度学习理论及实战(MATLAB版)》  NT$ 458
《工业级知识图谱:方法与实践》  NT$ 649
《零基础学机器学习》  NT$ 539
《智能设计:理论与方法》  NT$ 239
《机器视觉——使用HALCON描述与实现》  NT$ 632
Synopsis:
The victory of the AlphaGo agent over top human Go players reshaped perceptions of artificial intelligence and brought its core technology, reinforcement learning, broad attention from the research community. Against this background, the book presents the author's years of research on reinforcement learning theory and applications, together with recent developments in the field at home and abroad, and is one of the few specialist monographs on reinforcement learning. It focuses on reinforcement learning based on direct policy search, combining a range of statistical learning methods to analyse, improve and apply the relevant techniques.

The book describes policy search reinforcement learning algorithms from a modern perspective. Starting from different reinforcement learning settings, it discusses the practical difficulties reinforcement learning faces in real applications; for each setting it presents a concrete policy search algorithm, analyses the statistical properties of the algorithm's estimators and learned parameters, and demonstrates and quantitatively compares the algorithms on application examples. In particular, drawing on recent advances in reinforcement learning, it applies policy search algorithms to robot control and digital art rendering. Finally, based on the author's long-term research experience, it briefly reviews and summarises trends in the development of reinforcement learning. The material is classical and comprehensive, the concepts are clearly explained and the derivations rigorous, aiming to form a complete body of knowledge that integrates fundamental theory, algorithms and applications.
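To make the setting concrete, the following toy sketch (not taken from the book) shows the kind of likelihood-ratio policy gradient estimator with a baseline that direct policy search methods rely on and whose statistical properties the synopsis refers to; the bandit task, function names and constants are all hypothetical illustrations.

# Minimal likelihood-ratio ("REINFORCE"-style) policy gradient with a baseline.
# Illustrative sketch only, not code from the book; the bandit task and all
# names/constants here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def sample_reward(action):
    # Toy 3-armed bandit: arm 2 has the highest expected reward.
    means = np.array([0.2, 0.5, 0.8])
    return rng.normal(means[action], 0.1)

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

theta = np.zeros(3)       # policy parameters: one logit per arm
learning_rate = 0.1

for step in range(2000):
    probs = softmax(theta)
    actions = rng.choice(3, size=16, p=probs)         # batch of actions from the current policy
    rewards = np.array([sample_reward(a) for a in actions])
    baseline = rewards.mean()                         # constant baseline for variance reduction
    grad = np.zeros_like(theta)
    for a, r in zip(actions, rewards):
        # For a softmax policy, grad_theta log pi(a) = one_hot(a) - probs.
        grad += (r - baseline) * (np.eye(3)[a] - probs)
    theta += learning_rate * grad / len(actions)

print("learned action probabilities:", np.round(softmax(theta), 3))

Because the baseline does not depend on the sampled actions, subtracting it leaves the gradient estimate unbiased while it can substantially reduce its variance; quantifying and exploiting this kind of trade-off is the sort of statistical analysis described above.
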
About the author:
赵婷婷 is an associate professor in the College of Artificial Intelligence at Tianjin University of Science and Technology (天津科技大学). Her main research interests are artificial intelligence and machine learning. She is a member of the China Computer Federation (CCF), a member of YOCSEF, a member of the Chinese Association for Artificial Intelligence, and a member of its Pattern Recognition Technical Committee. In 2017 she was selected into the second tier of Tianjin's "131" Innovative Talent Training Project.
Table of Contents

Chapter 1  Overview of Reinforcement Learning
1.1  Reinforcement Learning within Machine Learning
1.2  Reinforcement Learning in Intelligent Control
1.3  Branches of Reinforcement Learning
1.4  Contributions of This Book
1.5  Structure of This Book
References

Chapter 2  Related Work and Background
2.1  Markov Decision Processes
2.2  Value-Function-Based Policy Learning Algorithms
2.2.1  Value Functions
2.2.2  Policy Iteration and Value Iteration
2.2.3  Q-learning
2.2.4  Least-Squares Policy Iteration
2.2.5  Value-Based Deep Reinforcement Learning Methods
2.3  Policy Search Algorithms
2.3.1  Modelling for Policy Search
2.3.2  Classical Policy Gradients (the REINFORCE Algorithm)
2.3.3  Natural Policy Gradient Methods
2.3.4  Expectation-Maximization-Based Policy Search Methods
2.3.5  Policy-Based Deep Reinforcement Learning Methods
2.4  Chapter Summary
References

Chapter 3  Analysis and Improvement of Policy Gradient Estimation
3.1  Background
3.2  Policy Gradients with Parameter-Based Exploration (the PGPE Algorithm)
3.3  Variance Analysis of Gradient Estimates
3.4  Baseline-Based Improvement and Analysis
3.4.1  The Basic Idea of Baselines
3.4.2  Baselines for the PGPE Algorithm
3.5  Experiments
3.5.1  Illustrative Examples
3.5.2  Inverted Pendulum Balancing
3.6  Summary and Discussion
References

Chapter 4  Parameter-Exploring Policy Gradient Algorithms Based on Importance Sampling
4.1  Background
4.2  PGPE in the Off-Policy Setting
4.2.1  Importance-Weighted PGPE (IW-PGPE)
4.2.2  Variance Reduction in IW-PGPE via Baseline Subtraction
4.3  Experimental Results
4.3.1  Illustrative Examples
4.3.2  The Mountain Car Task
4.3.3  Simulated Robot Control Tasks
4.4  Summary and Discussion
References
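
As the table of contents shows, Chapters 3 and 4 centre on PGPE, its baseline-corrected gradient estimator and its importance-weighted off-policy variant (IW-PGPE). For orientation only, here is a minimal sketch (not the book's code) of the parameter-based exploration idea behind PGPE: policy parameters are sampled from a Gaussian hyper-distribution, each sampled parameter vector is evaluated with a rollout, and the hyper-distribution's mean is updated with a likelihood-ratio gradient that again uses a baseline; the toy reward function and all names are hypothetical.

# Parameter-based exploration in the spirit of PGPE; an illustrative sketch
# only, not the book's implementation. The black-box reward and all names
# here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([1.0, -2.0, 0.5])   # hypothetical "best" policy parameters

def rollout_return(theta):
    # Stand-in for running a deterministic policy with parameters theta
    # in an environment and observing its noisy return.
    return -np.sum((theta - target) ** 2) + rng.normal(0.0, 0.1)

mu = np.zeros(3)      # mean of the Gaussian hyper-distribution over policy parameters
sigma = 1.0           # fixed exploration width, kept constant for simplicity
learning_rate = 0.05

for iteration in range(300):
    thetas = mu + sigma * rng.standard_normal((20, 3))   # sample 20 parameter vectors
    returns = np.array([rollout_return(t) for t in thetas])
    baseline = returns.mean()                            # baseline for variance reduction
    # Likelihood-ratio gradient w.r.t. mu of a Gaussian: (theta - mu) / sigma**2.
    grad_mu = ((returns - baseline)[:, None] * (thetas - mu)).mean(axis=0) / sigma**2
    mu += learning_rate * grad_mu

print("estimated optimum:", np.round(mu, 2), "target:", target)

Because exploration happens in parameter space, each rollout can use a deterministic policy; this is the setting in which the book studies baseline-based variance reduction (Chapter 3) and the off-policy reuse, via importance weighting, of rollouts collected under earlier sampling distributions (Chapter 4).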