
Difference between SNN RL and DNN RL?

Problem Detail: 

In Reinforcement Learning (RL) with Neural Networks (NNs), I've seen two approaches to Q-learning.

The first is to tile the state space with basis functions using Spiking Neural Networks (SNN) to represent reward. This approach is used in "A neural reinforcement learning model for tasks with unknown time delays" by Daniel Rasmussen and is expanded upon in "A neural model of hierarchical reinforcement learning" by the same author.
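To make the first approach concrete, here is a minimal sketch of tiling a state space with basis functions and learning a value readout with a TD(0) update. The centers, widths, and learning rates are illustrative choices, not values from Rasmussen's papers, and this is a rate-based abstraction rather than a spiking implementation:

```python
import numpy as np

# Tile a 1-D state space in [0, 1] with Gaussian basis functions and
# represent Q-values as a linear readout of their activities.
n_basis = 20
n_actions = 2
centers = np.linspace(0.0, 1.0, n_basis)   # basis-function centres
width = 0.05                                # shared Gaussian width

def activities(state):
    """Basis-function activations for a scalar state."""
    return np.exp(-(state - centers) ** 2 / (2 * width ** 2))

# Q(s, a) is a linear function of the basis activities.
decoders = np.zeros((n_basis, n_actions))

def q_values(state):
    return activities(state) @ decoders

def td_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One TD(0) update on the linear readout."""
    target = reward + gamma * np.max(q_values(next_state))
    error = target - q_values(state)[action]
    decoders[:, action] += alpha * error * activities(state)
```

The readout stays linear in the basis activities, which is what makes this style of model easy to map onto neural populations.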

The second is to use Deep Neural Networks (DNN) to map the state space to the reward space, as in the various publications from DeepMind.
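For contrast, the second approach can be sketched as a network that maps a raw state vector directly to one Q-value per action (DQN-style). The layer sizes and the 4-D state are illustrative assumptions; a real DQN would also add experience replay, a target network, and gradient-based training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a 4-D state (e.g. CartPole-like) and 2 actions.
W1 = rng.normal(0.0, 0.1, (4, 32))
b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 2))
b2 = np.zeros(2)

def q_network(state):
    """Map a raw state vector to one Q-value per action."""
    h = np.maximum(0.0, state @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2

state = np.array([0.1, -0.2, 0.05, 0.0])
q = q_network(state)
greedy_action = int(np.argmax(q))          # epsilon-greedy would add exploration
```

The key difference from the basis-function approach is that the state representation here is learned by the hidden layers rather than hand-tiled.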

From what I've read so far, I believe the differences between these two approaches are as follows:

  • SNNs take less time to train
  • SNNs can be deployed on existing neuromorphic hardware
  • SNNs are more easily extended into continuous cases
  • SNNs do more inference between states
  • DNNs can map larger and higher dimensional state spaces
  • DNNs require less knowledge of the task for the programmer

Is this analysis accurate? Are there other differences that I've missed?

Asked By : Seanny123

Answered By : Seanny123

Daniel Rasmussen replied via email:

  • SNNs take less time to train

This really depends on the training method, and the implementation. If you just implemented an abstract DNN approach to solve the same task as is being solved in those NEF RL papers, it would be a pretty simple network and would train quite fast (almost certainly faster than with the simple PES rule). On the other hand, if you recreated the internal structure of the SNN model, and trained it using a DNN approach (e.g., using something like nengo_deeplearning), I suspect that that would never converge to something useful. So overall it's hard to say something useful to compare them in this sense, but I think if you had to commit to something, the DNN training methods are probably more efficient than SNNs.
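For readers unfamiliar with the PES rule mentioned above: it is the error-driven decoder-learning rule used in NEF models, where each decoder moves in proportion to its neuron's activity and the current error signal. Below is a hedged sketch of that update; the function name and the learning rate `kappa` are illustrative, not taken from the papers:

```python
import numpy as np

def pes_update(decoders, activities, error, kappa=1e-4):
    """One PES-style decoder update.

    decoders:   (n_neurons, dims) readout weights
    activities: (n_neurons,) current neuron activities
    error:      (dims,) error signal to reduce
    """
    # Each decoder shifts along the error, scaled by its neuron's activity.
    return decoders + kappa * np.outer(activities, error)
```

Because the update is purely local (activity times error), it is simple and biologically plausible, but, as the answer notes, it is typically less sample-efficient than gradient-based DNN training.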

  • SNNs can be deployed on existing neuromorphic hardware

Yep

  • SNNs are more easily extended into continuous cases

I'm not sure if this is true. It is true that there aren't a lot of examples of people doing continuous state processing with DNNs, but that's also true of SNNs. I'm not sure it's any easier or harder for DNNs than SNNs (it's just harder than the discrete case in both paradigms).

  • SNNs do more inference between states

Again, I don't think this is true in general; it will all depend on how things are implemented.

  • DNNs can map larger and higher dimensional state spaces

Also depends on the implementation, but it is true that DNNs tend to be more computationally efficient than SNNs (because they're not incorporating all those biological features).

  • DNNs require less knowledge of the task for the programmer

Not categorically true, but probably true in most cases.


Question Source : http://cs.stackexchange.com/questions/57464
