Python Programming


INSTRUCTIONS TO CANDIDATES
ANSWER ALL QUESTIONS

Mathematics and Programming of AI Coursework

Introduction

 

On completing this coursework, you should be able to implement and understand advanced neural network techniques from scratch for solving real-world AI problems. This coursework builds on the material covered in the tutorial Jupyter notebooks and lecture slides. You should complete the first four tutorial notebooks before starting the coursework. While working through this coursework, you will report on classification experiments on object and emotion classification datasets. Your experiments should be designed so that they investigate your research questions. Python should be used for all implementations. The deliverables are the implementations for all tasks and a report.

 

Datasets

 

  • SVHN (http://ufldl.stanford.edu/housenumbers/) – a real-world image dataset for developing AI (object recognition) algorithms. It can be seen as similar in flavour to MNIST, but it incorporates an order of magnitude more labelled data (over 600,000 digit images) and comes from a significantly harder, unsolved, real-world problem (recognizing digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images. Overview: 10 classes, one for each digit. Digit '1' has label 1, '9' has label 9 and '0' has label 10. There are 73,257 digits for training, 26,032 digits for testing, and 531,131 additional, somewhat less difficult samples to use as extra training data. Format: MNIST-like 32-by-32 images centred around a single character (many of the images do contain some distractors at the sides). You will need it for Task 4 (a loading sketch is given after this list).

 

  • CK+ (http://www.consortium.ri.cmu.edu/ckagree/) – the Extended Cohn-Kanade dataset, a public benchmark dataset for emotion recognition. It has a total of 5,876 labelled images of 123 individuals. Out of these images, choose a reasonable number for training and testing. Each image is labelled with one of seven emotions: happiness, sadness, anger, fear, surprise, disgust, and contempt. Images in the CK+ dataset are all posed, with similar backgrounds, mostly grayscale, and 640x490 pixels. You will need it for Tasks 5 and 6.

 

  • Note: You can choose your own emotion classification dataset, but choose wisely. You are not being marked on how good the results are. What matters is that you try something sensible and clearly describe the problem, the method, what you did, and what the results were. Don't pick a dataset that is far too hard for your experiments. There are a number of options here: http://www.face-rec.org/databases/. Be careful not to do foolish things like data snooping (such as testing on your training data), including plots with unlabelled axes, or using undefined symbols in your report. Do sensible cross-checks, like running your models several times, leaving out small parts of your data, or adding a few noisy points, to make sure everything still works reasonably well. If you pick something you think is cool, it will make the process of getting it to work more pleasurable and writing up your results less boring.
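As a starting point for the SVHN data, the sketch below shows one way to load the cropped-digit .mat files with SciPy. The local file names and the remapping of label 10 back to digit 0 are assumptions about how you might organise the data, not part of the brief.

    import numpy as np
    from scipy.io import loadmat

    def load_svhn(path):
        # The cropped-digit files store images in 'X' with shape (32, 32, 3, N)
        # and labels in 'y' with shape (N, 1), where digit '0' has label 10.
        data = loadmat(path)
        X = np.transpose(data['X'], (3, 0, 1, 2)).astype(np.float32) / 255.0
        y = data['y'].flatten()
        y[y == 10] = 0  # optional remapping so labels run 0-9
        return X, y

    # Example usage (assumed local file names):
    # X_train, y_train = load_svhn('train_32x32.mat')
    # X_test, y_test = load_svhn('test_32x32.mat')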

 

 

Computing

 

  • You are free to use any IDE or code editor for Python. However, using a Jupyter notebook does have the advantage of organising your experiments, code, and results in one place.

  • You can use the lab workstations or work locally on your own machine.

  • You will need to give us explicit instructions on how to run your code. For Task 6, you will need to install either PyTorch or TensorFlow.

  • CUDA environment - this is only needed for Task 6. CUDA is installed on the lab workstations.

  • Some of your experiments may take significant compute time, so it is in your interest to start running experiments for this coursework as early as possible.

 

What the coursework is about: the tasks

 

Task 1. Implementing linear and ReLU layers

 

For this task, you should implement the forward and backward passes for linear layers and ReLU activations.
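For illustration, a minimal NumPy sketch of such layer functions is given below; the function names, the (output, cache) interface, and the array shapes are assumptions, not a required API.

    import numpy as np

    def linear_forward(x, W, b):
        # x: (N, D) inputs, W: (D, M) weights, b: (M,) biases
        out = x @ W + b
        cache = (x, W)
        return out, cache

    def linear_backward(dout, cache):
        # dout: (N, M) upstream gradient
        x, W = cache
        dx = dout @ W.T        # gradient w.r.t. inputs, (N, D)
        dW = x.T @ dout        # gradient w.r.t. weights, (D, M)
        db = dout.sum(axis=0)  # gradient w.r.t. biases, (M,)
        return dx, dW, db

    def relu_forward(x):
        out = np.maximum(0, x)
        return out, x          # cache the input for the backward pass

    def relu_backward(dout, x):
        # Pass the gradient through only where the input was positive.
        return dout * (x > 0)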

 

Task 2. Implementing dropout

 

For this task, you should implement inverted dropout, including both the forward and backward passes.

 

Note: Since test-time performance is critical, it is preferable to leave the forward pass unchanged at test time. Therefore, most implementations employ inverted dropout, which rescales activations during training so that no extra scaling is needed at test time, overcoming this undesirable property of the original dropout.
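A minimal sketch of inverted dropout along these lines is shown below; here p denotes the drop probability and the train/test switch is a string flag, both assumptions of this sketch. Scaling the mask by 1/(1-p) at training time keeps the expected activation the same, so the test-time forward pass is simply the identity.

    import numpy as np

    def dropout_forward(x, p=0.5, mode='train'):
        # p: probability of dropping a unit (assumed convention)
        if mode == 'train':
            mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)
            out = x * mask
        else:
            mask = None
            out = x            # test time: identity, no rescaling needed
        return out, (mask, mode)

    def dropout_backward(dout, cache):
        mask, mode = cache
        return dout * mask if mode == 'train' else dout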

 

Task 3. Implementing softmax classifier

 

For this task, you should implement the softmax loss and its gradients. Explain the numerical issues that arise when calculating the softmax function.
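One common way to handle the numerical issue (np.exp overflows for large scores) is to subtract the row-wise maximum from the scores before exponentiating, which leaves the softmax values unchanged. A minimal sketch, with an assumed (scores, labels) interface:

    import numpy as np

    def softmax_loss(scores, y):
        # scores: (N, C) raw class scores, y: (N,) integer labels
        shifted = scores - scores.max(axis=1, keepdims=True)   # numerical stability
        exp_scores = np.exp(shifted)
        probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)
        N = scores.shape[0]
        loss = -np.log(probs[np.arange(N), y]).mean()
        dscores = probs.copy()
        dscores[np.arange(N), y] -= 1    # gradient of cross-entropy w.r.t. scores
        dscores /= N
        return loss, dscores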

 

Task 4. Implementing fully-connected NN

 

For this task, you should implement a fully-connected NN with an arbitrary number of hidden layers, ReLU activations, softmax classification, and optional dropout. This task is about reusing your implementations from Task 1 to Task 3. In addition, you will add an L2 regularizer. Report the parameters used (update rule, learning rate, decay, epochs, batch size) and include the plots in your report.
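As an illustration of the L2 term, the sketch below adds it to a data loss and to the weight gradients; the reg strength argument and the params/grads dictionaries are assumptions about how your network might store its parameters.

    import numpy as np

    def add_l2_regularization(loss, params, grads, reg):
        # Add 0.5 * reg * ||W||^2 for each weight matrix (biases are usually
        # left unregularized) and the matching term reg * W to its gradient.
        for name, W in params.items():
            if name.startswith('W'):
                loss += 0.5 * reg * np.sum(W * W)
                grads[name] = grads[name] + reg * W
        return loss, grads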

 

Task 5. Optimization of hyper-parameters

 

For this task, you should optimize the hyper-parameters of a fully-connected NN on your chosen emotion classification dataset. Here you should implement SGD with momentum. You need to select a performance measure that will be used to compare the different parameter settings on the validation set.
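A minimal sketch of the SGD-with-momentum update is given below; the config-dictionary interface and the default values are assumptions of this sketch, not a required design.

    import numpy as np

    def sgd_momentum(w, dw, config=None):
        # w: parameter array, dw: its gradient for the current mini-batch
        if config is None:
            config = {}
        config.setdefault('learning_rate', 1e-2)
        config.setdefault('momentum', 0.9)
        v = config.get('velocity', np.zeros_like(w))

        v = config['momentum'] * v - config['learning_rate'] * dw   # velocity update
        next_w = w + v

        config['velocity'] = v
        return next_w, config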

 

Hyper-parameter optimization steps:

 

  1. Choose a proper network architecture as a starting point. Define a momentum and a learning rate.

  2. Try a different stopping criterion or a different learning rate update rule.

  3. Optimize the learning rate (disable regularization). Report how you found a good initial value for the learning rate (see the search sketch after these steps). Include the plot of the training loss and report the training and validation classification rates.

  4. Apply dropout and see if there is any improvement in the validation performance.

  5. Compare the performance of L2 regularization with dropout.

  6. Finally, optimize the topology of the network (the number of hidden layers and the number of neurons in each hidden layer).
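For step 3, one sensible way to find a good initial learning rate is a random search on a logarithmic scale, keeping the value that gives the best validation accuracy. The sketch below assumes a hypothetical train_and_validate(lr) callback that trains briefly and returns the validation accuracy; the callback and the default search range are not part of the brief.

    import numpy as np

    def random_lr_search(train_and_validate, num_trials=10, lr_range=(1e-5, 1e-1)):
        best_lr, best_acc = None, -1.0
        for _ in range(num_trials):
            # Sample log-uniformly so small and large rates are equally likely.
            lr = 10 ** np.random.uniform(np.log10(lr_range[0]), np.log10(lr_range[1]))
            acc = train_and_validate(lr)
            if acc > best_acc:
                best_lr, best_acc = lr, acc
        return best_lr, best_acc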

 

Task 6. Implementing your own deep NN

 

For this task, feel free to implement anything that can improve the performance of your NN. You can implement additional layers, use different types of regularization, use an ensemble of models, use batch normalization for training deeper networks, etc. Your modified network should be trained on your chosen emotion classification dataset. You can use either the PyTorch or TensorFlow DL libraries. You can also use Keras as a higher-level API with one of the above backends. You need to test the performance of your modified network with the optimal set of parameters on the test set, and report the confusion matrix, classification rate, and F1 measure per class. Compare the performance of your modified network with the feedforward NN from the previous task. Justify the results.
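A minimal PyTorch sketch along these lines is shown below; the architecture, the assumed 48x48 single-channel input, the seven-class output, and the use of scikit-learn for the confusion matrix and per-class F1 are all illustrative assumptions, not requirements.

    import torch
    import torch.nn as nn
    from sklearn.metrics import confusion_matrix, f1_score

    class EmotionNet(nn.Module):
        def __init__(self, in_dim=48 * 48, num_classes=7):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 256), nn.BatchNorm1d(256), nn.ReLU(), nn.Dropout(0.5),
                nn.Linear(256, 128), nn.BatchNorm1d(128), nn.ReLU(),
                nn.Linear(128, num_classes),
            )

        def forward(self, x):
            return self.net(x.flatten(1))   # flatten images to vectors

    def evaluate(model, loader, device='cpu'):
        # Returns the confusion matrix and the per-class F1 scores for a test loader.
        model.eval()
        preds, labels = [], []
        with torch.no_grad():
            for x, y in loader:
                out = model(x.to(device))
                preds.extend(out.argmax(dim=1).cpu().tolist())
                labels.extend(y.tolist())
        return confusion_matrix(labels, preds), f1_score(labels, preds, average=None)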

 

Report

 

Your final report should present the research questions that you investigate, and the experiments you designed and carried out. You should provide a clear presentation of the methods (network architectures, learning schedules, etc.) that were used, and an outline of how they were implemented. You should present the results clearly and concisely, and provide a discussion of them, with conclusions related to the research questions. The conclusions section might propose some further work based on the results of this coursework.
