cuda-convnet2 is a fast C++/CUDA implementation of convolutional (or, more generally, feed-forward) neural networks. It can model arbitrary layer connectivity and network depth; any directed acyclic graph of layers will do. It requires a Fermi-generation GPU (GTX 4xx, GTX 5xx, or a Tesla equivalent).
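The layer DAG is described in a plain-text layer-definition file, where each layer names the layers it takes input from. A minimal sketch of that format follows; the specific layer names and parameter values here are illustrative, not copied from a real config:

```
# layers.cfg -- illustrative sketch of a cuda-convnet2-style layer definition.
# Each section is a layer; "inputs" wires up the DAG by naming parent layers.
[data]
type=data
dataIdx=0

[conv1]
type=conv
inputs=data
channels=3
filters=64
filterSize=5

[fc10]
type=fc
inputs=conv1
outputs=10

[probs]
type=softmax
inputs=fc10
```

Because a layer can list multiple comma-separated inputs, any acyclic wiring of layers can be expressed, not just a linear stack.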
It is a slight evolution of the code Alex Krizhevsky used to win the ImageNet challenge back in 2012, so it is useful in a historical sense and worth looking at out of curiosity. It has since inspired many libraries for training ConvNets on the GPU.
What do you dislike?
It hasn't been updated for at least four years and, as such, does not include modern layers or features such as automatic differentiation.
Recommendations to others considering the product
It is the OG of deep learning libraries, but Caffe quickly became the library of choice, before itself being replaced by TensorFlow and PyTorch.
What business problems are you solving with the product? What benefits have you realized?
I ran cuda-convnet back in the day to train convolutional neural networks for image recognition. Its main advantage was how fast it ran on the GPU.