
INSTRUCTIONS TO CANDIDATES
ANSWER ALL QUESTIONS

4.2 Exploratory factor analysis

Table II shows that all items exhibit a slight positive skew, with most responses towards the higher end of the scale. The mean of each item is plausible given the 5-point Likert scale: no value falls above 5 or below 1. The standard deviations are all similar, suggesting that no item contains outliers. The 'Analysis N' column shows the number of valid cases; there are no missing values, as all 874 participants in the sample provided complete responses.
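The descriptive checks described above (N, mean, standard deviation, skewness) can be sketched in Python. The responses below are simulated as a stand-in, since the actual item data are not shown; the response probabilities are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert responses for one item (the real data are not shown).
rng = np.random.default_rng(0)
item = rng.choice([1, 2, 3, 4, 5], size=874, p=[0.05, 0.10, 0.20, 0.35, 0.30])

print("Analysis N:", item.size)            # number of valid cases
print("Mean:", round(item.mean(), 2))      # must lie between 1 and 5
print("SD:", round(item.std(ddof=1), 2))   # sample standard deviation
print("Skewness:", round(stats.skew(item), 2))
```

With responses concentrated at the higher end of the scale, the mean sits above the midpoint and the skewness statistic summarises the asymmetry of the distribution.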

As shown in Table III, communality is the proportion of an item's variance that is explained by the extracted factors; it ranges between 0 and 1 and equals the sum of the item's squared component loadings across the extracted components. This is calculated for the initial solution and again after extraction. The communalities are all close to 1, indicating that the extracted factors explain most of the variance of each individual item. As shown in Table IV, the first five factors are meaningful as they have eigenvalues > 1 (an eigenvalue represents the total amount of variance a given principal component explains). Factors 1, 2, 3, 4 and 5 explain 33.09%, 10.99%, 9.05%, 8.16% and 6.13% of the variance respectively, a cumulative total of 67.41%, which is acceptable.
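The relationship between eigenvalues, variance explained, and communalities can be illustrated with a small principal-components sketch. The data here are simulated correlated items, not the study's data, so the numbers will not match Tables III and IV.

```python
import numpy as np

# Hypothetical correlated item data standing in for the 874 x k response matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(874, 6)) @ rng.normal(size=(6, 6))

R = np.corrcoef(X, rowvar=False)               # item correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]              # sort components by eigenvalue, descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Percentage of total variance explained by each component.
pct_var = 100 * eigvals / eigvals.sum()

# Kaiser criterion: retain components with eigenvalue > 1.
n_keep = int((eigvals > 1).sum())

# Loadings for retained components; communality = sum of squared loadings per item.
loadings = eigvecs[:, :n_keep] * np.sqrt(eigvals[:n_keep])
communalities = (loadings ** 2).sum(axis=1)
```

Because the items are standardised, each communality lies between 0 and 1, and the per-component percentages sum to 100%, mirroring the structure of Tables III and IV.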

The KMO test in Table V displays the KMO statistic to the right of "Kaiser-Meyer-Olkin Measure of Sampling Adequacy". As the value is above 0.8, the sample as a whole is adequate for factor analysis, so the data can meaningfully be tested to see whether the items measure what the researchers intended. Bartlett's test of sphericity tests the hypothesis that the correlation matrix is an identity matrix, which would indicate that the variables are unrelated and therefore unsuitable for structure detection. The significance level (p < .05) indicates that factor analysis is appropriate for these data.
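Bartlett's test of sphericity can be computed directly from the correlation matrix; a minimal implementation, run here on simulated correlated data rather than the study's data, is:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(X):
    """Bartlett's test that the correlation matrix is an identity matrix."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    # Statistic: -(n - 1 - (2p + 5)/6) * ln|R|, chi-square with p(p-1)/2 df.
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    p_value = chi2.sf(statistic, df)
    return statistic, p_value

# Hypothetical correlated items (874 cases, 5 variables).
rng = np.random.default_rng(2)
X = rng.normal(size=(874, 5)) @ rng.normal(size=(5, 5))
stat, p = bartlett_sphericity(X)
# A small p-value (< .05) rejects the identity-matrix hypothesis,
# i.e. the items are sufficiently correlated for factor analysis.
```

If the correlation matrix really were an identity matrix, its determinant would be 1, the statistic would be 0, and the test would not reject.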

4.3 Confirmatory factor analysis

 

In this study, the potential threat of common method bias was checked with Harman's single-factor test, which assumes that if the risk of common method bias is substantial, a single latent factor will account for more than 50% of the total variance of the measures (Min et al., 2016). The single-factor model accounted for only 24.09% of the total variance; in short, common method bias was not a critical threat in this study.

Confirmatory factor analysis was then used to test the validity of the measures, and the results confirmed both convergent and discriminant validity. Following Hair et al.'s (2014) recommendations, three indicators must be considered to establish convergent validity: the factor loadings of the items, composite reliability (CR), and average variance extracted (AVE). The absolute value of each factor loading estimate should be at least 0.5, and ideally above 0.7. AVE, which measures the variance captured by a construct relative to the variance due to measurement error, is considered very good above 0.7, while 0.5 is acceptable. CR is a less biased estimate of reliability than Cronbach's alpha; the acceptable value of CR is 0.7 or above.

As shown in Table VI, the Cronbach's alpha values used to evaluate the reliability of each sub-scale are all above .80, which shows the survey is reliable (Hair et al., 2014). Table VII indicates that the proposed confirmatory factor analysis model exceeds the standard fit criteria: a model is considered a good fit to the data if RMSEA < .06, GFI > .9, AGFI > .9, NFI > .9 and CFI > .95 (Hooper et al., 2008), and the model fit indices met these thresholds. As can be seen from Table VIII, the factor loadings of each item are all above 0.65, the AVE values are above 0.5, and the CR values are above 0.8, all of which are acceptable.
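AVE and CR both follow directly from the standardised factor loadings of a construct's items. A minimal sketch, using hypothetical loadings (consistent with the "all above 0.65" reported in Table VIII, but not the actual estimates), is:

```python
import numpy as np

def convergent_validity(loadings):
    """AVE and composite reliability (CR) from standardized factor loadings."""
    lam = np.asarray(loadings, dtype=float)
    ave = np.mean(lam ** 2)                       # average variance extracted
    errors = 1 - lam ** 2                         # item error variances
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())
    return ave, cr

# Hypothetical standardized loadings for one four-item construct.
ave, cr = convergent_validity([0.72, 0.78, 0.81, 0.69])
```

With these loadings, AVE exceeds the 0.5 threshold and CR exceeds 0.7, so the construct would pass the convergent-validity checks described above.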
Table IX shows that discriminant validity was further confirmed by applying the Fornell and Larcker (1981) criterion: for each construct, the square root of its AVE (shown on the diagonal: 0.720, 0.744 and 0.797) is higher than its correlations with the other constructs, confirming good discriminant validity.
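The Fornell-Larcker check can be expressed as a simple comparison of each construct's square-root AVE against its inter-construct correlations. The diagonal values below echo those reported in Table IX, but the correlation matrix is hypothetical since the study's off-diagonal values are not shown.

```python
import numpy as np

def fornell_larcker(ave, corr):
    """True if each construct's sqrt(AVE) exceeds all its correlations with other constructs."""
    sqrt_ave = np.sqrt(np.asarray(ave, dtype=float))
    corr = np.asarray(corr, dtype=float)
    off_diag = corr - np.diag(np.diag(corr))      # zero out the diagonal
    return all(sqrt_ave[i] > off_diag[i].max() for i in range(len(sqrt_ave)))

# sqrt(AVE) diagonal values from Table IX; off-diagonal correlations are hypothetical.
ave = [0.720 ** 2, 0.744 ** 2, 0.797 ** 2]
corr = [[1.00, 0.55, 0.48],
        [0.55, 1.00, 0.60],
        [0.48, 0.60, 1.00]]
print(fornell_larcker(ave, corr))  # True -> discriminant validity supported
```

The criterion holds whenever each construct shares more variance with its own items (AVE) than with any other construct (squared correlation).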
