
Need a Machine Learning Project Proposal?

Codersarts offers project assistance with any type of machine learning task, including machine learning project proposals, code implementations of machine learning projects, research paper implementations, and 1:1 live project-based mentorship.


Reach out to us directly via email at contact@codersarts.com and our team will respond promptly.


Let's look at the elements and format of a successful machine learning project proposal.



Proposal:

Deep Neural Networks (DNNs) are used in a variety of applications, such as object recognition in images and acoustic processing for speech recognition. There is significant motivation to use large training sets, as performance depends heavily on the amount of training data. Banko and Brill [2] found "that simple, classical models can outperform newer, more complex models, just because the simple models can be tractably learnt using orders of magnitude more input data." However, training DNNs is a time-intensive process, so the size of the training set is often limited by available resources. Raina et al. [3] noted that "parameter learning can take weeks using a conventional implementation on a single CPU." Dean et al. [1] have investigated ways to use distributed networks to reduce training time, allowing the use of much larger training sets and thus much more effective networks. We believe that by exploiting the available parallelism, we can improve the performance of a traditional DNN.



Methods:

We will implement a deep neural network on both a cluster of computers and a sequential machine, and compare performance on varying sizes of training data. We will rely heavily on the work of Dean et al. [1] in our development of a distributed algorithm. In particular, they outline methods for distributing stochastic gradient descent (SGD) and limited-memory BFGS (L-BFGS); we expect to implement and test only one of these procedures. A traditional, sequential deep network will be run as a control. Comparisons between the two will be based on both object recognition accuracy and the time taken to train the network. The data-parallel SGD idea is sketched below.
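
To make the distributed training idea concrete, here is a minimal sketch of data-parallel ("Downpour"-style) SGD in the spirit of Dean et al. [1], simulated sequentially with NumPy. The toy logistic-regression model, the synthetic dataset, and all hyperparameters are illustrative assumptions, not part of the proposal; a real implementation would run the workers asynchronously across machines.

import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: linearly separable binary labels for a logistic-regression model.
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)

def gradient(w, X_batch, y_batch):
    # Gradient of the logistic loss on one mini-batch.
    p = 1.0 / (1.0 + np.exp(-(X_batch @ w)))
    return X_batch.T @ (p - y_batch) / len(y_batch)

# Parameter-server state: a single shared weight vector.
w = np.zeros(20)
lr = 0.1
n_workers = 4
shards = np.array_split(np.arange(len(y)), n_workers)  # one data shard per worker

for step in range(100):
    for shard in shards:
        # Each worker fetches the current parameters, computes a gradient on a
        # mini-batch drawn from its own shard, and pushes the update back to
        # the server. A real system would do this asynchronously over a network.
        batch = rng.choice(shard, size=32, replace=False)
        w -= lr * gradient(w, X[batch], y[batch])

print("training accuracy:", np.mean(((X @ w) > 0) == y))

The same loop structure carries over to a real cluster: only the fetch/push steps become network calls to a parameter server, and the workers run concurrently rather than in turn.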



Dataset:

We will use images from ImageNet (http://www.image-net.org/) to train and test the neural networks. Image recognition is a common task for deep neural networks, and we expect the large dataset available from ImageNet to highlight potential advantages of the distributed algorithm.
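
As a concrete starting point, the following sketch loads an ImageNet-style directory of images for training. The framework (PyTorch/torchvision), the placeholder path, and the preprocessing choices are our assumptions for illustration; the proposal does not commit to specific tooling.

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Standard preprocessing for image-recognition networks.
preprocess = transforms.Compose([
    transforms.Resize(256),       # scale the shorter side to 256 px
    transforms.CenterCrop(224),   # take a 224x224 center crop
    transforms.ToTensor(),        # HWC uint8 -> CHW float in [0, 1]
])

# "data/imagenet/train" is a placeholder path; ImageFolder expects the usual
# root/<class_name>/<image>.jpeg layout of the downloaded ImageNet data.
train_set = datasets.ImageFolder("data/imagenet/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

for images, labels in train_loader:
    print(images.shape, labels.shape)  # e.g. torch.Size([64, 3, 224, 224])
    break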




Milestone:

We will have a traditional DNN working by the milestone deadline. We will also have the parallel algorithm coded and be in the process of debugging and optimizing it. By then we expect to be very close to running experiments on the parallel network.




References:

[1] J. Dean et al. Large Scale Distributed Deep Networks. In NIPS, 2012.

[2] M. Banko and E. Brill. Scaling to Very Very Large Corpora for Natural Language Disambiguation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2001, pp. 26-33.

[3] R. Raina, A. Madhavan, and A. Y. Ng. Large-Scale Deep Unsupervised Learning Using Graphics Processors. In ICML, 2009.




Sources for background learning:

[1] A. Ng, J. Ngiam, C. Y. Foo, Y. Mai, and C. Suen. UFLDL Tutorial. http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial

[2] G. Hinton. Video lecture on deep learning: http://videolectures.net/jul09_hinton_deeplearn/



Credits:



This Dartmouth archive of past machine learning course projects is also a helpful reference for writing a proposal: https://www.cs.dartmouth.edu/~lorenzo/teaching/cs174/Archive/Winter2013/Projects/projects.html

