4-2. Q-learning Implementation (Table)
This post is a summary of Professor Sung Kim (김성훈)'s lecture content.
Source: http://hunkim.github.io/ml/ (모두를 위한 머신러닝/딥러닝 강의, hunkim.github.io)
Let's implement the algorithm explained in the previous lecture!

It can be implemented exactly as described.
* env.action_space.sample(): takes a random action.
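For reference, here is a minimal sketch (my own addition, not part of the lecture code) that runs the same FrozenLake-v3 setup using nothing but env.action_space.sample(); its success rate is typically close to zero, which is the baseline the Q-table below should beat. It assumes the same pre-0.26 gym API as the code further down.

import gym
from gym.envs.registration import register

register(
    id='FrozenLake-v3',
    entry_point='gym.envs.toy_text:FrozenLakeEnv',
    kwargs={'map_name': '4x4', 'is_slippery': False}
)
env = gym.make('FrozenLake-v3')

num_episodes = 2000
successes = 0
for _ in range(num_episodes):
    state = env.reset()
    done = False
    while not done:
        # No learning at all: take a random action at every step
        state, reward, done, _ = env.step(env.action_space.sample())
    successes += reward  # reward is 1 only when the goal is reached
print("Random-action success rate:", successes / num_episodes)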
Implementation of the method that adds a noise value:

dis is set to a value smaller than 1.
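A rough illustration of why dis stays below 1 (my own example, not from the lecture): the update Q[state, action] = reward + dis * np.max(Q[new_state, :]) backs the goal reward up one step at a time, multiplying by dis at each hop, so a start state that lies fewer steps from the goal ends up with a larger value.

dis = .99
print(dis ** 4)   # ~0.961: value backed up to the start of a 5-step path
print(dis ** 9)   # ~0.914: value backed up to the start of a 10-step path, so the shorter path wins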



You can confirm that the results come out well!

Let's check with the e-greedy method as well!

It found more diverse paths than the previous noise-adding method.

Implementation code (environment: Ubuntu 16.04, Python 3.6)
Exploit vs. Exploration method (adding noise)
import gym
import numpy as np
import matplotlib.pyplot as plt
from gym.envs.registration import register

register(
    id='FrozenLake-v3',
    entry_point='gym.envs.toy_text:FrozenLakeEnv',
    kwargs={'map_name': '4x4',
            'is_slippery': False}
)

env = gym.make('FrozenLake-v3')

# Initialize table with all zeros
Q = np.zeros([env.observation_space.n, env.action_space.n])
# Discount factor
dis = .99
num_episodes = 2000

# create lists to contain total rewards and steps per episode
rList = []

for i in range(num_episodes):
    # Reset environment and get first new observation
    state = env.reset()
    rAll = 0
    done = False

    # The Q-Table learning algorithm
    while not done:
        # Choose an action by greedily (with noise) picking from Q table
        action = np.argmax(Q[state, :] + np.random.randn(1, env.action_space.n) / (i + 1))

        # Get new state and reward from environment
        new_state, reward, done, _ = env.step(action)

        # Update Q-Table with new knowledge using decay rate
        Q[state, action] = reward + dis * np.max(Q[new_state, :])

        rAll += reward
        state = new_state

    rList.append(rAll)

print("Success rate: " + str(sum(rList) / num_episodes))
print("Final Q-Table Values")
print(Q)

plt.bar(range(len(rList)), rList, color="blue")
plt.show()
E-greedy method
import gym
import numpy as np
import matplotlib.pyplot as plt
from gym.envs.registration import register
import random as pr

register(
    id='FrozenLake-v3',
    entry_point='gym.envs.toy_text:FrozenLakeEnv',
    kwargs={'map_name': '4x4',
            'is_slippery': False}
)

env = gym.make('FrozenLake-v3')

# Initialize table with all zeros
Q = np.zeros([env.observation_space.n, env.action_space.n])
# Set learning parameters
dis = .99
num_episodes = 2000

# create lists to contain total rewards and steps per episode
rList = []

for i in range(num_episodes):
    # Reset environment and get first new observation
    state = env.reset()
    rAll = 0
    done = False
    e = 1. / ((i // 100) + 1)  # Python2&3

    # The Q-Table learning algorithm
    while not done:
        # Choose an action by e greedy
        if np.random.rand(1) < e:
            action = env.action_space.sample()
        else:
            action = np.argmax(Q[state, :])

        # Get new state and reward from environment
        new_state, reward, done, _ = env.step(action)

        # Update Q-Table with new knowledge using discount factor
        Q[state, action] = reward + dis * np.max(Q[new_state, :])

        rAll += reward
        state = new_state

    rList.append(rAll)

print("Success rate: " + str(sum(rList) / num_episodes))
print("Final Q-Table Values")
print(Q)

plt.bar(range(len(rList)), rList, color="blue")
plt.show()
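One note on the schedule used above (my own reading of the code, not from the lecture): e = 1. / ((i // 100) + 1) decays in steps of 100 episodes, so early episodes act almost entirely at random and later ones mostly exploit the Q-table.

for i in [0, 50, 100, 250, 500, 1000, 1900]:
    e = 1. / ((i // 100) + 1)
    print(i, round(e, 3))  # prints 1.0, 1.0, 0.5, 0.333, 0.167, 0.091, 0.05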