Hi all, 

I have an 800-feature model that I would like to compress by reducing the number of features, while of course maintaining the accuracy level as much as I can.

Is there an algorithm or technique I can apply to the features to find the ones with the highest influence on the classes, before training a model?

I have thought of training the model on 50 features at a time and extracting the best ones from every run, but this process requires a lot of computational resources, so I was wondering if there's a better way?
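To make the batched idea concrete, here is a minimal sketch of what I mean, using scikit-learn on synthetic data (the dataset, chunk size, and number of features kept per run are placeholders, not my actual setup):

```python
# Sketch of the batched approach: train a small model on 50-feature
# chunks and keep the most important features from each run.
# Synthetic data stands in for the real 800-feature dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=800,
                           n_informative=40, random_state=0)

chunk = 50           # features per run
keep_per_chunk = 10  # features kept from each run (placeholder)
selected = []
for start in range(0, X.shape[1], chunk):
    cols = np.arange(start, min(start + chunk, X.shape[1]))
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[:, cols], y)
    # rank this chunk's features by importance, keep the top few
    top = cols[np.argsort(clf.feature_importances_)[::-1][:keep_per_chunk]]
    selected.extend(top.tolist())

print(len(selected))  # 160 candidate features for a final model
```

This is the part that gets expensive: 16 separate model fits just to shortlist candidates, which is why I'm asking whether something cheaper (computed directly over the features and classes) exists.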
