Question 1
Using techniques covered in class, which of the variables look like good candidates to help separate defaulters from non-defaulters? For the two you consider the best choices, include brief supporting output (e.g., one chart per variable) and a brief explanation of why each variable may be a good choice.
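(For illustration only: the assignment itself is done in JMP, but the same visual check can be sketched in Python. The file name default.csv and the candidate variables balance and income below are placeholders; substitute whatever variables you are actually screening.)

```python
# Minimal sketch of a per-class comparison for two candidate predictors.
# File name and column names are placeholders, not part of the assignment data.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("default.csv")                        # hypothetical file name

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, var in zip(axes, ["balance", "income"]):       # assumed candidate variables
    for label, grp in df.groupby("default"):
        ax.hist(grp[var], bins=30, alpha=0.5, label=f"default={label}")
    ax.set_xlabel(var)
    ax.set_ylabel("count")
    ax.legend()
plt.tight_layout()
plt.show()
# A variable is a promising separator when the two class histograms overlap little.
```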
Question 2
This dataset is very unbalanced. (Look at the distribution for ‘default’ to see this.) To see what happens when we use an unbalanced dataset to build a model, we will first build a decision tree from the original data. Use between 70% and 80% of the data (your choice) for your training set, 10% for the validation set, and the remainder for the test set.
Include just your tree and your confusion matrices in your report.
Very briefly: How well does this model do? (You don’t need to perform any calculations—you should be able to see very easily what is happening.) Why does this happen? (Looking at your tree may help you understand this.)
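As a rough illustration of this workflow outside JMP, here is a minimal Python sketch; it assumes a file default.csv with a numeric 0/1 ‘default’ column (both names are placeholders), and uses a 75/10/15 split as one choice within the allowed ranges.

```python
# Sketch only: partition the data, fit a tree on the unbalanced target, and
# inspect the confusion matrices. File/column names are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

df = pd.read_csv("default.csv").dropna(subset=["default"])   # labelled rows only
X = df.drop(columns=["default"]).select_dtypes("number")     # assumes numeric predictors
y = df["default"].astype(int)

# 75% training, then split the remaining 25% into 10% validation / 15% test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.75, random_state=1)
X_valid, X_test, y_valid, y_test = train_test_split(X_rest, y_rest, train_size=0.4, random_state=1)

tree = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_train, y_train)
for name, Xs, ys in [("train", X_train, y_train),
                     ("valid", X_valid, y_valid),
                     ("test", X_test, y_test)]:
    print(name, "\n", confusion_matrix(ys, tree.predict(Xs)))
# With a heavily unbalanced target, expect the tree to favour the majority class
# (non-default): overall accuracy looks high while few defaulters are caught.
```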
Question 3
Now, build a decision tree from the same dataset, but balancing the data using weighting, according to the rules that we have discussed. Show how you determined what size weights you used to balance your set.
Use between 70% and 80% of the data (your choice) for your training set, 10% for the validation set, and the remainder for the test set.
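One way the weight calculation might look, assuming the rule from class is to give the two classes equal total weight (non-defaulters keep weight 1); this is only a sketch, and the file and column names are placeholders.

```python
# Sketch of one common weighting rule: weight each defaulter by
# n_nondefault / n_default so both classes carry equal total weight.
import pandas as pd

df = pd.read_csv("default.csv").dropna(subset=["default"])   # labelled rows only
n_nondefault = (df["default"] == 0).sum()
n_default    = (df["default"] == 1).sum()

w_default = n_nondefault / n_default     # e.g. 9000 / 1000 -> each defaulter gets weight 9
df["weight"] = df["default"].map({0: 1.0, 1: w_default})

print("counts:", n_nondefault, "non-defaulters vs", n_default, "defaulters")
print("weight assigned to each defaulter:", round(w_default, 2))
print(df.groupby("default")["weight"].sum())   # total weight per class is now equal
```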
Include in your answer the tree diagram, the ‘split history’ diagram showing how the optimal tree size was obtained, and the confusion matrices (training, validation, and test). For the test-set confusion matrix only:
• If you are using JMP Pro 14, you will need to correct for the weighting process in order to determine the performance on the original, unbalanced distribution. Show your calculations. (If you need help with this, it is covered in the exercises document for the textbook.)
o If you are using JMP Pro 16, it seems to automatically correct for the weighting process. You should double-check, however! If there are similar numbers of 1 and 0 instances in the confusion matrix, then you are seeing ‘weighted’ results, and you will need to ‘reverse’ the weighting. If you are seeing many more 0 instances than 1s, then the results have already been corrected.
• After doing so, calculate the correct classification rates, and interpret them (including both precision and recall).
Interpret your results. Is this classifier doing a good job? Be sure to consider the nature of the data and the business problem.
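A worked sketch of the ‘reverse the weighting’ correction and the precision/recall arithmetic described in the bullets above. The confusion-matrix counts are made-up placeholders, and the sketch assumes the weighting scheme shown earlier (only the default=1 rows were weighted); substitute your own test-set numbers and the weight you actually applied.

```python
# Placeholder weighted counts as reported by the tool (rows = actual class).
w = 9.0                       # per-row weight that was applied to the default=1 class (example value)
tp_w, fn_w = 630.0, 270.0     # actual 1 predicted 1 / predicted 0 (inflated by w)
fp,   tn   = 400.0, 8600.0    # actual 0 predicted 1 / predicted 0 (not inflated)

# Only the default=1 row was inflated by the weight, so divide it back out.
tp, fn = tp_w / w, fn_w / w

precision = tp / (tp + fp)                    # of customers flagged as defaulters, fraction that truly default
recall    = tp / (tp + fn)                    # of true defaulters, fraction the model catches
accuracy  = (tp + tn) / (tp + tn + fp + fn)   # overall correct-classification rate
print(f"precision={precision:.3f}  recall={recall:.3f}  accuracy={accuracy:.3f}")
```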
Question 4
Consider your decision tree. Can you make general statements about who is likely to default, based on the rules in your tree? (Don’t simply restate every rule! Consider whether there are general insights to be gained.)
Question 5
In the data file provided, there are 25 rows at the end with no label. Consider these rows to be new or potential customers for whom we want to make predictions. (That’s one of the key purposes of our work, after all!)
Of course, with a decision tree we could manually make a prediction for each new customer, but that is (a) slow, (b) a lot of work, and (c) error-prone. Moreover, for many of the models we will see later in the course, it is much more cumbersome to make predictions manually. We would prefer to use the tool itself.
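In JMP this typically means saving the model’s prediction formula to the data table so the unlabeled rows are scored automatically. Purely as an illustration of the same idea in Python (default.csv and the column names are placeholders, and the model settings are arbitrary):

```python
# Sketch: fit on the labelled rows, then batch-score the 25 unlabelled rows
# instead of tracing each case through the tree by hand.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("default.csv")                           # hypothetical file name
labelled   = df[df["default"].notna()]
unlabelled = df[df["default"].isna()]                     # the 25 new/potential customers

features = labelled.drop(columns=["default"]).select_dtypes("number").columns
tree = DecisionTreeClassifier(max_depth=4, random_state=1)
tree.fit(labelled[features], labelled["default"].astype(int))

scored = unlabelled[features].copy()
scored["pred_default"] = tree.predict(unlabelled[features])        # 0/1 prediction per customer
scored["p_default"]    = tree.predict_proba(unlabelled[features])[:, 1]  # estimated default probability
print(scored[["pred_default", "p_default"]])
```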