The question is which test you are interested in. The test above is for the F test of a regression: it assesses the significance of the R² at the given effect size level (e.g., effect size 0.15 = R²). Hence, with 138 cases you will find a model with an R² of 0.15 significant 95% of the time. Many researchers, however, are more interested in the significance of single effects than in the variance explained by the overall regression equation. Hence, you might want to choose "Linear multiple regression: Fixed model, single regression coefficient" as the method in G*Power. There you have to set the effect size f², which is the same effect size that is reported in SmartPLS under f². It is the contribution of the predictor to the R²: (R²included − R²excluded) / (1 − R²included). Unfortunately, this effect size is not very straightforward to determine. An f² of 0.15, for instance, would require a standardized coefficient of roughly 0.275 if your overall R² is at about 0.5, and a standardized coefficient of 0.336 if your overall R² is at 0.25. If you have an expectation for the standardized coefficient of 0.2 and an overall R² of 0.25, you would get an effect size f² of 0.053. With five predictors, this would require you to have a sample size of 248 for a power of 95% to find the 0.2 coefficient significant.

Great to see that there is a thread on G*Power here already. I have a problem relating to the statistical power and critical t boundaries for f² values (effect size). My dataset has 98 respondents, and I have 7 constructs predicting one specific dependent variable. I am interested in the statistical power of the predictors' effects on the dependent variable. I use the following settings in G*Power 3.1.9.2:

Statistical test: Linear multiple regression: Fixed model, single regression coefficient
Type of power analysis: Sensitivity: Compute required effect size - given alpha, power, and sample size

Running the PLS algorithm in SmartPLS v.3.2.6 results in four paths with f² values above 0.082 (namely, 0.27, 0.25, 0.23, and 0.10). Based on the previous G*Power calculation, I interpret these results as statistically significant and with sufficient statistical power, based on the criteria I've specified. However, when I bootstrap the f² values in SmartPLS (500 subsamples), the t statistics are not even close to the critical t boundary. The one largest effect (f² = 0.27) displays a t statistic of 1.563 and a p value of 0.119. Can anybody tell me how to interpret these differences between G*Power and SmartPLS? Is the t statistic (in relation to the critical t value) more "important" than the G*Power calculation of the power of the effect sizes?
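For anyone who wants to sanity-check these numbers outside G*Power, here is a minimal Python sketch (my addition, not part of the thread). It assumes uncorrelated predictors, so that a predictor's unique contribution to R² (R²included − R²excluded) reduces to its squared standardized coefficient, and it uses the noncentrality convention lambda = f²·N that G*Power applies to the fixed-model single-coefficient test; small rounding differences from G*Power's output are possible.

```python
# Minimal sketch (assumptions: uncorrelated predictors; lambda = f2 * N).
from scipy.stats import f as f_dist, ncf
from scipy.optimize import brentq

def f_squared(beta, r2_overall):
    """Cohen's f2 = (R2_inc - R2_exc) / (1 - R2_inc), with R2_inc - R2_exc = beta**2."""
    return beta**2 / (1.0 - r2_overall)

def power_single_coef(f2, n, n_pred, alpha=0.05):
    """Power of the fixed-model F test for one coefficient among n_pred predictors."""
    df1, df2 = 1, n - n_pred - 1
    crit = f_dist.ppf(1.0 - alpha, df1, df2)      # critical F value
    return 1.0 - ncf.cdf(crit, df1, df2, f2 * n)  # noncentral F tail probability

def required_n(f2, n_pred, alpha=0.05, power=0.95):
    """Smallest sample size reaching the target power (plain linear search)."""
    n = n_pred + 2
    while power_single_coef(f2, n, n_pred, alpha) < power:
        n += 1
    return n

def required_f2(n, n_pred, alpha, power):
    """Sensitivity analysis: smallest effect size detectable with the given n."""
    return brentq(lambda f2: power_single_coef(f2, n, n_pred, alpha) - power,
                  1e-6, 5.0)

print(f_squared(0.275, 0.50), f_squared(0.336, 0.25))  # both ~0.15, as above
print(f_squared(0.2, 0.25))                  # ~0.053
print(required_n(0.053, n_pred=5))           # ~248 for 95% power, as above
print(required_f2(98, 7, 0.05, 0.80))        # ~0.082, the boundary in the question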
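To see where the bootstrap t statistics come from mechanically, here is an illustrative analogue using plain OLS rather than SmartPLS's PLS estimation (again my addition, not the SmartPLS internals). Note that f² is bounded at zero and its bootstrap distribution is strongly right-skewed in small samples, which is one plausible reason the bootstrap t values can look much weaker than a power calculation suggests.

```python
# Illustrative analogue of bootstrapping f2 (plain OLS, not SmartPLS's PLS
# algorithm): t = f2_original / sd(f2_bootstrap), as in a bootstrap t test.
import numpy as np

def f2_for_predictor(X, y, j):
    """f2 of column j: (R2_inc - R2_exc) / (1 - R2_inc), via least squares."""
    def r2(Xm):
        Z = np.column_stack([np.ones(len(y)), Xm])   # add intercept column
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        return 1.0 - resid.var() / y.var()
    r2_inc = r2(X)
    r2_exc = r2(np.delete(X, j, axis=1))
    return (r2_inc - r2_exc) / (1.0 - r2_inc)

def bootstrap_t(X, y, j, n_boot=500, seed=0):
    """Bootstrap t for predictor j's f2 (500 subsamples, as in the post)."""
    rng = np.random.default_rng(seed)
    estimate = f2_for_predictor(X, y, j)
    boots = []
    for _ in range(n_boot):
        i = rng.integers(0, len(y), len(y))  # resample respondents with replacement
        boots.append(f2_for_predictor(X[i], y[i], j))
    return estimate / np.std(boots, ddof=1)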
Okay, after sleeping on it, I realized a possible way to go about the problem described above. Commonly, the relation between type 1/alpha errors ("false positives") and type 2/beta errors ("false negatives") is viewed as a trade-off: if you accept a more liberal boundary for the one, you should take a more conservative approach for the other. At least, that's my understanding of power analysis. In general, 0.05/0.80 (alpha err prob / 1 − beta err prob) are used as conventional criteria in the social sciences. Consequently, if I change my settings in G*Power to, say, an alpha error probability of 0.20 and a power (1 − beta error probability) of 0.99, then my results make more sense compared to the bootstrapping results in SmartPLS. So, there is a 20% probability that I will erroneously reject the null hypothesis (= no effect) for effects as low as f² = 0.13. On the other hand, there is a 1% probability that I will fail to detect an effect as low as f² = 0.13, if the effect is indeed present. Going back to the t statistics generated by the bootstrapping procedure: applying these adjusted specifications, the three predictors showing the largest f² values now have acceptable t and p values (t values above 1.291 and p values below 0.20). However, the predictor displaying an f² value of around 0.10 still displays insignificant t and p values. I attribute the higher alpha error probability to the small sample size (n = 98). But since it is a pilot study, I still think there are some interesting findings that are worth looking into with a larger sample size (the full-scale study).

I want to calculate the required sample size to validate a model: the measurement model (which has already been validated in a Delphi study); the structural model, which has a lot of relations (we have several models designed a priori); and an IPMA analysis to determine which latent variables and indicators are important to 'influence' the dependent latent variable.
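Closing the loop on both questions above, here is a quick check of the adjusted criteria plus a hypothetical a priori sample-size calculation for a full-scale study, using the same noncentral-F machinery as the first sketch. The f² = 0.10 planning target is my assumption, picked as the weakest pilot effect; it is not from the thread.

```python
# Self-contained check (illustrative; same noncentral-F approach as above).
from scipy.optimize import brentq
from scipy.stats import f as f_dist, ncf

def power_single_coef(f2, n, n_pred, alpha):
    df1, df2 = 1, n - n_pred - 1
    crit = f_dist.ppf(1.0 - alpha, df1, df2)
    return 1.0 - ncf.cdf(crit, df1, df2, f2 * n)

# Sensitivity at the liberal alpha / conservative beta settings (0.20 / 0.99):
f2_min = brentq(lambda f2: power_single_coef(f2, 98, 7, 0.20) - 0.99, 1e-6, 5.0)
print(round(f2_min, 2))   # ~0.13, matching the post

# Hypothetical a priori planning: n needed to detect f2 = 0.10 at 0.05 / 0.80.
n = 9
while power_single_coef(0.10, n, 7, 0.05) < 0.80:
    n += 1
print(n)                  # roughly 80 respondents under these assumptions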