My sample size was very small (n=8), and I don't think the study was powerful enough to detect a significant difference. The difference between the means was 15%. I have read that performing a retrospective (observed) power analysis is considered bad form; however, I want to comment on the fact that with more participants I might have had significant results. Any help would be much appreciated, as I am not a statistician! Many thanks, M
-
Hopefully you understand that statistical significance does not necessarily equate to biological significance.
Now that you have a good feel for the variability in your data, a retrospective power analysis is fine AS LONG AS you use the results to tell you what kind of sample size you'll need in YOUR FOLLOW-UP EXPERIMENT in order to resolve a significant difference. Using it merely to speculate on "what might have been" is not only bad form; if you try to publish using such a strategy, you will get your manuscript returned by the associate editor with a polite cover letter saying, "thanks, but no thanks."

Power analysis can, however, be very useful in deciding whether to repeat the study with a larger sample size. For example, if it indicates that an n of 10 would be sufficient, all you need to do is run a few more subjects and maybe you'll reach significance. But if it indicates you'd need an n of 30, 50, or 100, that may be a sign that further pursuit of this question is not prudent. For most things in biology, a 10-20% difference is pretty much expected normal biological variation, so with your present experimental model you may be skating on the edge of biological irrelevance. That's not the same as saying "you've failed"; rather, it means you've learned what doesn't work. Even experiments that don't work teach us something.
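To make that concrete, here is a minimal sketch of the standard two-sample normal-approximation formula for planning the follow-up sample size. The standard deviation (sigma) is an assumption you'd plug in from your pilot data; the values 1.96 and 0.8416 are the usual z quantiles for a two-sided alpha of 0.05 and 80% power.

```python
import math

def required_n_per_group(delta, sigma, z_alpha=1.96, z_beta=0.8416):
    """Approximate n per group for a two-sample comparison of means.

    delta:  the smallest mean difference you care to detect
            (e.g. 15, on the same scale as sigma)
    sigma:  assumed within-group standard deviation (from pilot data)
    z_alpha: z quantile for two-sided alpha = 0.05
    z_beta:  z quantile for power = 0.80
    """
    n = 2 * ((z_alpha + z_beta) ** 2) * (sigma / delta) ** 2
    return math.ceil(n)  # round up to a whole subject

# Hypothetical example: a 15-point difference with an assumed SD of 20
print(required_n_per_group(delta=15, sigma=20))  # -> 28 per group
```

If the number that comes out is close to what you can feasibly recruit, a follow-up may be worthwhile; if it is far larger, that is the "not prudent" signal described above. Note this is the normal approximation; exact t-based calculations (e.g. in G*Power or statsmodels) will give slightly larger n.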