Kun Cheng
Followers: 0 Following: 0
Statistics
7 Questions
0 Answers
RANK: 223,091 of 295,467
REPUTATION: 0
CONTRIBUTIONS: 7 Questions, 0 Answers
ANSWER ACCEPTANCE: 85.71%
VOTES RECEIVED: 0

RANK: of 20,234
REPUTATION: N/A
AVERAGE RATING: 0.00
CONTRIBUTIONS: 0 Files
DOWNLOADS: 0
ALL TIME DOWNLOADS: 0

RANK: of 153,912
CONTRIBUTIONS: 0 Problems, 0 Solutions
SCORE: 0
NUMBER OF BADGES: 0

CONTRIBUTIONS: 0 Posts

CONTRIBUTIONS: 0 Public Channels
AVERAGE RATING:

CONTRIBUTIONS: 0 Highlights
AVERAGE NO. OF LIKES:
Feeds
Question
How to generate edge information from a PDE mesh?
Hi there, my task is generating a list of all edges in a mesh. (P.S. I do not mean the edges of the geometry; I saw some function there, whic...
about 2 months ago | 1 answer | 0
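A minimal sketch of one way to build such an edge list with the PDE Toolbox, assuming a 2-D triangular mesh; the lshapeg geometry and the Hmax value are only illustrative, not taken from the question.

model = createpde();
geometryFromEdges(model, @lshapeg);          % sample L-shaped geometry
msh = generateMesh(model, "Hmax", 0.2);      % FEMesh object
tri = msh.Elements(1:3, :);                  % corner nodes of each triangle, 3-by-Nt
% Collect the three edges of every triangle as node pairs, then sort each
% pair and drop duplicates so edges shared by two triangles appear only once.
edges = [tri([1 2], :), tri([2 3], :), tri([3 1], :)].';
edges = unique(sort(edges, 2), "rows");      % Ne-by-2 list of mesh edges

Each row of edges then holds the two node indices of one mesh edge, independent of the edges of the underlying geometry.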
Question
Why can sim() not read a pretrained DQN agent?
Hi, I have trained an agent and want to simulate it with sim(), but there is an error: Error using rl.env.AbstractEnv/sim Inva...
11 months ago | 1 answer | 0
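One possible cause of this kind of error is passing the struct returned by load to sim instead of the agent object stored inside it. A hedged sketch, assuming the agent was saved in a file named savedAgent.mat under the variable name agent and that env is already defined:

s = load("savedAgent.mat");                  % load returns a struct of saved variables
agent = s.agent;                             % extract the rlDQNAgent object itself
simOpts = rlSimulationOptions("MaxSteps", 500);
experience = sim(env, agent, simOpts);       % env must match the agent's observation/action specs
totalReward = sum(experience.Reward.Data);   % Reward is returned as a timeseries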
Question
Why did the AC agent converge to minimal reward?
Hello everyone! I trained an AC agent, but the agent converged to a policy that gives minimal reward. I'm not sure if the problem is ...
about a year ago | 1 answer | 0
Question
Question about convergence of Q0 and average reward
Hi guys, I am training a DDQN agent. The average reward converges in about 1000 episodes, but the Q0 value needs 3000 episodes more...
more than a year ago | 1 answer | 0
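The statistics returned by train already contain both curves, so they can be compared directly after training. A minimal sketch, assuming agent, env, and trainOpts are already defined; AverageReward and EpisodeQ0 are fields of the Reinforcement Learning Toolbox training output:

trainStats = train(agent, env, trainOpts);   % per-episode training statistics
figure; hold on
plot(trainStats.AverageReward, "DisplayName", "Average reward")
plot(trainStats.EpisodeQ0, "DisplayName", "Episode Q0")
xlabel("Episode"); ylabel("Value"); legend show

Q0 is the critic's estimate of the discounted return at the start of each episode, so it typically trails the average reward until the critic itself has converged.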
Question
Question about calculating the loss value of a DQN agent (reinforcement learning)
Hello everyone, I want to plot the curve of the loss value during training. It is not available in the DQN training options directly, as I kn...
more than a year ago | 0 answers | 0
Question
Why does the agent not output the optimal solution during validation?
Hello everyone. Topic: Reinforcement Learning, DQN agent. I have trained an agent with my dataset (28 training samples in total); the...
more than a year ago | 1 answer | 0
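When validating, it can help to query the action the trained agent returns for a single observation and compare it with the known optimal one. A minimal sketch, assuming env and agent exist; depending on the release, getAction may return the action wrapped in a cell array:

obs = reset(env);                            % initial observation, or one sample from the validation set
act = getAction(agent, {obs});               % action chosen by the trained agent for this observation
if iscell(act), act = act{1}; end
disp(act)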
Question
Why did the agent fail to accelerate after pretraining?
Hi, I trained a pre-trained agent in the same environment. I expected the model to converge faster, but it did not happen. ...
more than a year ago | 1 answer | 0
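When continuing from a pretrained DQN agent, a high initial exploration rate can mask what was already learned, since early episodes are dominated by random actions. A hedged sketch, assuming a DQN agent whose options can be modified in place (recent toolbox releases); the option names come from rlDQNAgentOptions and rlTrainingOptions, and the values are illustrative only:

agent.AgentOptions.EpsilonGreedyExploration.Epsilon = 0.1;       % start with little random exploration
agent.AgentOptions.EpsilonGreedyExploration.EpsilonMin = 0.01;
agent.AgentOptions.EpsilonGreedyExploration.EpsilonDecay = 1e-3;
trainOpts = rlTrainingOptions("MaxEpisodes", 1000, "ScoreAveragingWindowLength", 20);
trainStats = train(agent, env, trainOpts);                       % env is assumed to be defined

Whether reducing exploration actually helps depends on how similar the new task is to the one the agent was pretrained on; if the environment changed, more exploration may be needed instead.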