# load relevant libraries
library(plyr)

# read in data
change_data_all_t2 <- read.csv("../analysis/change_data_all_t2.csv")

# fit a series of models for comparison
nullMod <- lm(percent.cover.change.t2 ~ cover.t0,
              data = change_data_all_t2)

a <- lm(percent.cover.change.t2 ~ urchins + cover.t0,
        data = change_data_all_t2)

b <- lm(percent.cover.change.t2 ~ richness.t0 + cover.t0,
        data = change_data_all_t2)

d <- lm(percent.cover.change.t2 ~ richness.t0 + urchins + cover.t0,
        data = change_data_all_t2)

e <- lm(percent.cover.change.t2 ~ richness.t0 * urchins + cover.t0,
        data = change_data_all_t2)

# compare models using AICc
aicW.mmi(list(nullMod, a, b, d, e))
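aicW.mmi() looks like a custom helper rather than a base or CRAN function, so as a sketch of what an AICc comparison like the one above computes, here is base-R AICc plus Akaike weights for a list of models. The data are simulated stand-ins (the CSV above isn't included here), and the aicc() helper is my own invention, not the author's:

```r
# AICc = AIC + small-sample correction 2k(k+1)/(n-k-1)
aicc <- function(mod) {
  k <- attr(logLik(mod), "df")  # number of estimated parameters (incl. sigma)
  n <- nobs(mod)
  AIC(mod) + 2 * k * (k + 1) / (n - k - 1)
}

# simulated stand-in data with the same column names as the real dataset
set.seed(42)
fake <- data.frame(
  percent.cover.change.t2 = rnorm(30),
  cover.t0 = runif(30),
  urchins  = rpois(30, 3)
)

mods <- list(
  null = lm(percent.cover.change.t2 ~ cover.t0, data = fake),
  a    = lm(percent.cover.change.t2 ~ urchins + cover.t0, data = fake)
)

vals  <- sapply(mods, aicc)
delta <- vals - min(vals)                        # AICc differences
w     <- exp(-delta / 2) / sum(exp(-delta / 2))  # Akaike weights
round(cbind(AICc = vals, delta = delta, weight = w), 2)
```

The Akaike weights sum to 1 and can be read as the relative support for each model in the candidate set, which is presumably what the helper above reports.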
I grew up, like so many ecologists, using point-and-click stats programs. I designed experiments for ANOVA, considered non-normality a sin, and was a slave to Excel's bar graphs. Maybe sometimes I'd go wild and think on a logit scale. Eventually I got blessed into the brotherhood of SAS, loading and reloading giant sets of CDs every year and hoping that the University would keep our license updated.

As a final note for those experimentalists coming to R from the world of JMP/SAS/SPSS/etc., pretty much the first question I get once someone runs their first analysis is "What's up with my ANOVA giving me different results from my old software?" The answer comes down to different methods of calculating sums of squares. There are a lot of issues here, and they're worth reading up on so you understand what they mean. As a good starting point, see this blog post and the links to mailing-list posts and journal papers in the comments.
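To see the sums-of-squares issue concretely, here's a minimal sketch with invented data. R's default anova() uses sequential (Type I) sums of squares, so with unbalanced data a predictor's sum of squares depends on where it enters the formula:

```r
# Invented, deliberately unbalanced two-factor data
set.seed(1)
dat <- data.frame(
  y  = rnorm(20),
  f1 = factor(rep(c("a", "b"), times = c(8, 12))),
  f2 = factor(rep(c("x", "y"), times = c(9, 11)))
)

mod1 <- lm(y ~ f1 + f2, data = dat)  # f1 entered first
mod2 <- lm(y ~ f2 + f1, data = dat)  # f1 entered second

# Same model, but different Type I sums of squares for f1:
anova(mod1)["f1", "Sum Sq"]
anova(mod2)["f1", "Sum Sq"]
```

SAS and SPSS report marginal (Type III) sums of squares by default, which is why the numbers disagree; in R, Anova() from the car package can produce Type II or Type III tables that don't depend on the order of terms.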