Optimizing neural network on small training set

Problem Detail: 

I'm in the process of optimizing my neural network. I'd like to optimize on a small training set (1000 rows) as opposed to my full training set (100K rows) for speed reasons.

Will the optimal hyper-parameters (e.g., my learning rate, dropout probability, regularization parameter, number of hidden units, etc.) for my small training set also be optimal for my large training set? In other words, which hyper-parameters can I optimize on my small training set, and which must I optimize on my large one?

Thanks--

Asked By : sir_thursday
Answered By : D.W.

This is a bad idea. Hyper-parameters tuned on a 1,000-row subset will often transfer poorly to the full data set, and for many machine learning tasks, having a lot of data is essential to getting good results.

Instead, I recommend you set yourself up with software and hardware that can train your network on the full training set efficiently: buy a fast GPU, use software that can exploit the GPU for training, and use stochastic gradient descent with mini-batches and other standard techniques.
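As a rough illustration of that setup, here is a minimal mini-batch SGD training loop in PyTorch. This is a sketch under assumptions: the synthetic data, layer sizes, batch size, and hyper-parameter values are placeholders, not values from the question.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder data standing in for the 100K-row training set.
X = torch.randn(100_000, 50)
y = torch.randint(0, 2, (100_000,))
loader = DataLoader(TensorDataset(X, y), batch_size=256, shuffle=True)

# A small feed-forward network; the architecture is illustrative only.
model = nn.Sequential(
    nn.Linear(50, 128), nn.ReLU(), nn.Dropout(0.5), nn.Linear(128, 2)
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for xb, yb in loader:
        # Move each mini-batch to the GPU, then take one SGD step.
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```

Moving only one mini-batch at a time to the GPU keeps host memory usage modest while still letting the GPU do the heavy lifting on the full 100K rows.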

You'll likely need to optimize all of your hyper-parameters on the full training set; I don't think optimizing them on a small training set is likely to work well. If there's any past research on similar machine learning tasks, look at what network architecture and hyper-parameters they used, and use that as a starting point for your exploration.
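For instance, a simple random search over the hyper-parameters named in the question might look like the sketch below. `train_and_evaluate` is a hypothetical helper (not from the question or answer) that would train the network on the full training set with the given settings and return a validation score; the search-space values are illustrative guesses.

```python
import random

# Illustrative search space over the hyper-parameters named in the question.
search_space = {
    "lr": [1e-4, 1e-3, 1e-2, 1e-1],
    "dropout": [0.2, 0.3, 0.5],
    "weight_decay": [0.0, 1e-5, 1e-4],
    "hidden_units": [64, 128, 256],
}

best_score, best_params = float("-inf"), None
for _ in range(20):  # the number of random trials is arbitrary here
    params = {name: random.choice(values) for name, values in search_space.items()}
    # train_and_evaluate is a hypothetical helper: it trains on the full
    # 100K-row set and returns accuracy on a held-out validation split.
    score = train_and_evaluate(params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)
```

With a fixed trial budget, random search tends to cover the space better than grid search when only a few of the hyper-parameters actually matter.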

Best Answer from StackExchange

Question Source : http://cs.stackexchange.com/questions/60347
