Effect Size: Report Statistics SPSS

Report Statistics

  1. Effect Size

The effect size associated with the data depends on the type of test used to analyze the sample. In this particular case, the tests run were a bootstrapped correlation and a randomization test of the correlation. Effect size describes how big a difference the intervention makes. On its own, the effect size tells us little about either statistical or clinical significance; its main practical consequence is the sample size required. Small correlations of around 0.20 require large samples, medium correlations of around 0.40 require medium-sized samples, and larger correlations of around 0.60 require only small samples. Since the correlation here is about 0.60, a smaller sample will work effectively, so n = 25 is suggested.
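For a correlation design, the effect size is simply Pearson's r itself. As a minimal illustrative sketch (the arrays x and y below are hypothetical stand-ins for the study's actual variables), it can be computed directly:

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements standing in for the study's variables.
rng = np.random.default_rng(42)
x = rng.normal(50, 10, size=25)
y = 0.6 * x + rng.normal(0, 8, size=25)

# Pearson's r is the effect size reported for a correlation analysis.
r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, two-tailed p = {p:.4f}")
```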

  2. Desired Power

Power is, roughly, the ability of a study to detect a difference between mean scores or a correlation of a given magnitude. A study can lack power whatever the effect size, and many published studies are underpowered. Only once the effect size has been determined does it make sense to assess the power of the study: it is possible to specify a target power level in advance, but that level may not be attainable with the effect size present in the sample. To reach the desired power of 95% with a small effect size, a sample on the order of 250 would be needed; because the effect size here is large (r = 0.60), a much smaller sample of about 25 is recommended instead.
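As a rough sketch of why larger correlations need fewer cases, the helper below (a hypothetical function, not SPSS output) uses the standard Fisher z-transformation approximation for the sample size needed to detect a correlation r at a given alpha and power; its exact figures depend on that approximation and will not reproduce any particular software's tables:

```python
import numpy as np
from scipy import stats

def n_for_correlation(r, alpha=0.05, power=0.95):
    """Approximate n needed to detect a population correlation r with a
    two-tailed test, via the Fisher z approximation:
    n = ((z_{1-alpha/2} + z_{power}) / arctanh(r))**2 + 3."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_power = stats.norm.ppf(power)
    return int(np.ceil(((z_alpha + z_power) / np.arctanh(r)) ** 2 + 3))

# Smaller correlations demand far larger samples at the same power.
for r in (0.20, 0.40, 0.60):
    print(r, n_for_correlation(r))
```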

  3. Interpretation

Figure 1 shows the results of drawing 1000 bootstrapped samples, each of n = 50, drawn with replacement from the original data set. The sampling distribution of r is negatively skewed, as we would expect for a correlation this far from zero (r cannot exceed 1.0). The 95% confidence limits are .548 and .792. Those are fairly wide, but n = 50 is not very large for setting confidence limits, and confidence limits have an unpleasant habit of being wider than we would like. Notice that the limits do not include 0.0, which confirms that the correlation is significant using a test that does not rely on bivariate normality of the data. Given the nature of the variables used in this study, an assumption of bivariate normality would not have been terribly unreasonable anyway.
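A minimal sketch of the percentile bootstrap described above, assuming the paired observations are held in two NumPy arrays x and y of length 50 (the function name is ours, not SPSS syntax):

```python
import numpy as np
from scipy import stats

def bootstrap_ci_r(x, y, n_boot=1000, ci=95, seed=1):
    """Percentile bootstrap CI for Pearson's r: resample the paired cases
    with replacement and recompute r for each bootstrap sample."""
    rng = np.random.default_rng(seed)
    n = len(x)
    rs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)           # case indices, with replacement
        rs[i] = stats.pearsonr(x[idx], y[idx])[0]
    half = (100 - ci) / 2
    return np.percentile(rs, [half, 100 - half])   # e.g. 2.5th and 97.5th percentiles
```

Because r is recomputed on resampled cases each time, the resulting interval makes no bivariate-normality assumption, which is exactly why the limits quoted above hold even if the data are not normal.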

Figure 2 shows that the correlation obtained by Katz et al. (1990) on the original data was .686. The sampling distribution of r under randomization is symmetric around 0.0, and 0 of the 10,000 randomizations exceeded +.686. This gives a probability under the null hypothesis of .000, which certainly allows us to reject the null. This is a two-tailed test; because the randomization distribution is symmetric under the null (ρ = 0), you will not go far wrong if you cut the probability in half for a one-tailed test.
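The randomization test can be sketched the same way: shuffle one variable to break the pairing (which is what the null hypothesis implies), recompute r, and count how many of the 10,000 shuffles are at least as extreme as the observed .686. A hypothetical Python version:

```python
import numpy as np
from scipy import stats

def randomization_p(x, y, n_perm=10_000, seed=1):
    """Two-tailed randomization test of r: permute y to break the pairing,
    recompute r, and count permutations at least as extreme as observed."""
    rng = np.random.default_rng(seed)
    r_obs = stats.pearsonr(x, y)[0]
    extreme = 0
    for _ in range(n_perm):
        r_perm = stats.pearsonr(x, rng.permutation(y))[0]
        if abs(r_perm) >= abs(r_obs):
            extreme += 1
    return r_obs, extreme / n_perm
```

With a correlation as large as .686 and n = 50, essentially no shuffled data sets reach that value, which is why the reported probability comes out as .000.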

 

