# tl;dr

In which I reflect on my past self’s bad R practices. Learnings: don’t use file.choose(), setwd() or attach(); structure your projects sensibly; write functions instead of copy-pasting code.

# A startling discovery

I dug up a time capsule from a decade ago. It contained poorly constructed R code.

Twist: it was me who wrote it.

Reading these scripts brought back the sweet nostalgia of running the vanilla R GUI on my precious white MacBook, using R as little more than an interactive calculator for ecological analyses.

What nuggets of pain did I unearth? Past-Matthew seemed to like:

• file.choose() and setwd() for handling file paths
• attach() for accessing variables in data frames
• a cluttered, long-lived workspace
• copy-pasted code instead of functions

This post is a belated learning moment for past-Matthew.

# 1. Falling foul of a file-finding fail

Can’t remember where a file is? Don’t want long file paths cluttering your scripts? Never mind! Past-Matthew was using file.choose(), which opens your file explorer so you can navigate to the correct file.

df <- read.csv(file.choose())

But how can anyone reading your script (including you) know what file you actually read in? It’s not recorded in your script. You can’t re-run this code without that information.

Solutions:

• good project-folder structure that puts all the elements of your analysis — data, scripts, outputs — in one place, so it’s portable and others can use it without having to change anything
• relative file paths that start from your project folder, so you can use computer-agnostic paths like data/cool-data.csv rather than path/specific/to/my/machine/data/cool-data.csv

Tools:

• RStudio Projects encourage good folder structure and have the bonus of relative file paths, which start from the directory containing the .Rproj file.
• the {here} package by Kirill Müller also helps with relative file paths: here() builds a path relative to the perceived ‘home’ of the project, or to wherever a hidden .here file has been placed manually with set_here()
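As a minimal sketch (assuming the {here} package is installed and your project contains the data/cool-data.csv file from the example above), you might write:

```r
# Build a machine-agnostic path from the project root with {here}
library(here)

path <- here("data", "cool-data.csv")
# df <- read.csv(path)  # then read the file as usual
```

here() looks upwards from the working directory for a project marker — an .Rproj file, a Git repo or a .here file — and anchors the path there, so the same code works on any machine.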

## Justified arson

You may wonder why I haven’t mentioned setwd() as a solution here. It’s because Jenny Bryan will set your computer on fire.

But of course past-Matthew did this.1 He would use setwd() to point to where the project was housed locally:

setwd("/Users/Matthew/local/path/to/project/")
df <- read.csv("data/some_file.csv")

What’s the problem? The bit in setwd() is not reproducible — it’s the file location on one particular machine only!

# 2. Getting too attached

This problem begins with a question: how does R know where to look for a variable?

Here are three ways to calculate Pokémon body mass index by reference to variables in a data set:

# Read Pokémon data from URL
df <- read.csv(
  "https://raw.githubusercontent.com/mwdray/datasets/master/pokemon_go_captures.csv"
)

# BMI calculation three ways
x <- mean(df$weight_kg / df$height_m ^ 2)  # dollar notation
y <- mean(df[["weight_kg"]] / df[["height_m"]] ^ 2)  # square brackets
z <- with(df, mean(weight_kg / height_m ^ 2))  # with() function

# All produce the same results?
all(x == y, y == z, x == z)
## [1] TRUE

So each line specifies the data frame object where R should look for the named variables. If you don’t provide this object, R will error:

mean(weight_kg / height_m ^ 2)
## Error in mean(weight_kg / height_m ^ 2) : object 'weight_kg' not found

R was searching for the weight_kg variable in a few places, starting with the global environment, but couldn’t find it. You can see the search path it takes:

search()
## [1] ".GlobalEnv"        "package:stats"     "package:graphics"
## [4] "package:grDevices" "package:utils"     "package:datasets"
## [7] "package:methods"   "Autoloads"         "package:base"

The data object isn’t in there, so that’s why it can’t find those variables.

Past-Matthew got around this by using attach(), which lets you add objects to the search path.

attach(df)
search()  # now 'df' is in the search path
##  [1] ".GlobalEnv"        "df"                "package:stats"
##  [4] "package:graphics"  "package:grDevices" "package:utils"
##  [7] "package:datasets"  "package:methods"   "Autoloads"
## [10] "package:base"

The following expression can now be calculated because R can find the variable names in the attached df object.

mean(weight_kg / height_m ^ 2)
## [1] 31.17416

So we never need to refer to the data frame name at all. Wow, how can that be bad?

Here’s one reason. Consider a data set with column names that match our original:

df2 <- df[species == "caterpie", ]
attach(df2)
## The following objects are masked from df:
##
##     charge_attack, combat_power, fast_attack, height_bin, height_m,
##     hit_points, species, weight_bin, weight_kg

You might be able to guess the problem: R will get variables from df2 first, since it was the most recently attached.

Bad news: this means the code we wrote earlier will get a different result.

mean(weight_kg / height_m ^ 2)
## [1] 31.64357

This has serious implications for reproducibility and the confidence you can have in your results.

See also the ‘search list shuffle’ danger of attach() referenced in Circle 8.1.35 of The R Inferno by Patrick Burns.

Past-Matthew was using this approach because he was taught with Mick Crawley’s R Book. Mick says attach() ‘makes the code easier to understand for beginners’ (page 18)2 — possibly because expressions end up looking less cluttered. But this only sets up beginners (like me) for problems later. In fact, Mick even says to ‘avoid using attach wherever possible’ in his book.

Pro tip: if you do ever use attach() (don’t), you’ll want to make sure you detach() your objects from the search path.

detach(df)
detach(df2)

# 3. Polluting the environment

Past-Matthew clearly executed different projects and scripts in the same running instance of R.

The potential for confusion and error is high in this scenario. Was the object results created from analysis1.R or analysis2.R? Maybe results is now out of date because the code has been updated.
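A minimal sketch of the confusion, using the hypothetical leftover results object mentioned above:

```r
# Yesterday, analysis1.R left this object in the workspace...
results <- c(1, 2, 3)

# ...so today's script finds a 'results' object it never created
exists("results")  # TRUE, even though this script didn't build it
mean(results)      # 2 -- but is that yesterday's data or today's?
```

Nothing errors, nothing warns; the stale object is used silently, which is exactly why it’s so hard to spot.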

I’m also certain that the content of past-Matthew’s workspace was being saved at the end of each session — the default behaviour — meaning all that trash would come back next time he fired up R.

There were also some strange defensive lines like the following, which pre-emptively detaches the {nlme} package because its functions conflict with {lme4}:

detach("package:nlme")  # conflicts with lme4

Unnecessary and odd. I assume this was because past-Matthew was never quite sure of the state of his current working environment.

These days I treat everything in my environment with suspicion: I restart R regularly and rebuild objects from scratch. This gives me confidence that my script does what I think it does, and stops older objects from interfering with my results.

I also modified the default behaviour of RStudio to prevent my workspace being saved, which means I can start afresh when I open a project. To do this, untick ‘Restore .RData on startup’ and set ‘Save workspace to .RData on exit’ to ‘Never’ in Tools > Global Options > General > Basic > Workspace.

Read more about workflow in the R for Data Science book by Garrett Grolemund and Hadley Wickham.

# 4. There’s a function for that

Turns out past-Matthew repeated loads of code because functions looked too much like Real Programming and were therefore Quite Hard.

Here’s a simple example of code repetition that was pretty common in past-Matthew’s scripts:

# Subset the data and then get a mean value
sub_koffing <- subset(df, species == "koffing")
mean_koffing <- round(mean(sub_koffing[["weight_kg"]]), 2)

# Do it again for a different species
sub_paras <- subset(df, species == "paras")
mean_paras <- round(mean(sub_paras[["weight_kg"]]), 2)

# Do it again for a different species
sub_geodude <- subset(df, species == "geodude")
mean_geodude <- round(mean(sub_koffing[["weight_kg"]]), 2)

# Print results
mean_koffing; mean_paras; mean_geodude
## [1] 0.92
## [1] 5.39
## [1] 0.92

You know this is bad news; copy-pasting leads to mistakes. See how two of those outputs are suspiciously similar? Oops.3

(Note the use of semi-colons here as well. Past-Matthew seemed to like using these to print multiple results, but I don’t use these anymore and don’t see anyone else doing it.)

Functions let you write the meat of the code just once, eliminating the copy-paste error. You can then loop over the variables of interest to get your results.

The effort of learning to write your own functions is worth it to avoid the problems. See R for Data Science for more on this.

Here’s one way to tackle the code repetition above:

# Function to calculate a rounded mean value for a given species
get_sp_mean <- function(sp, data = df, var = "weight_kg", dp = 2) {
  sub_sp <- subset(data, species == sp)      # subset data
  mean_sp <- round(mean(sub_sp[[var]]), dp)  # get rounded mean
  return(mean_sp)  # function will output the mean value
}

# Create a named vector to iterate over
species <- c("koffing", "paras", "geodude")
names(species) <- species  # make it a named vector

# Iterate over the vector to apply the function
purrr::map(species, get_sp_mean)
## $koffing
## [1] 0.92
##
## $paras
## [1] 5.39
##
## $geodude
## [1] 23.24

Friendship ended with code repetition. Now bespoke functions and {purrr} are my best friends.

# Reflections

I think it’s a good exercise to look back and critique your old code. What makes you cringe when you look back on it?

There’s no shame in writing code that does what you want it to do. I can see why past-Matthew did the things he did. But I’m also glad he stopped doing them.

See you in ten years to look back on the inevitably terrible code I’ve written in this blog.

Session info

## ─ Session info ───────────────────────────────────────────────────────────────
##  setting  value
##  version  R version 3.6.1 (2019-07-05)
##  os       macOS Sierra 10.12.6
##  system   x86_64, darwin15.6.0
##  ui       X11
##  language (EN)
##  collate  en_GB.UTF-8
##  ctype    en_GB.UTF-8
##  tz       Europe/London
##  date     2020-04-21
##
## ─ Packages ───────────────────────────────────────────────────────────────────
##  package     * version date       lib source
##  assertthat    0.2.1   2019-03-21 [1] CRAN (R 3.6.0)
##  blogdown      0.17    2019-11-13 [1] CRAN (R 3.6.0)
##  bookdown      0.18    2020-03-05 [1] CRAN (R 3.6.0)
##  cli           2.0.2   2020-02-28 [1] CRAN (R 3.6.0)
##  crayon        1.3.4   2017-09-16 [1] CRAN (R 3.6.0)
##  curl          4.3     2019-12-02 [1] CRAN (R 3.6.0)
##  digest        0.6.25  2020-02-23 [1] CRAN (R 3.6.0)
##  evaluate      0.14    2019-05-28 [1] CRAN (R 3.6.0)
##  fansi         0.4.1   2020-01-08 [1] CRAN (R 3.6.0)
##  glue          1.3.1   2019-03-12 [1] CRAN (R 3.6.0)
##  hms           0.5.3   2020-01-08 [1] CRAN (R 3.6.0)
##  htmltools     0.4.0   2019-10-04 [1] CRAN (R 3.6.0)
##  knitr         1.28    2020-02-06 [1] CRAN (R 3.6.0)
##  magrittr      1.5     2014-11-22 [1] CRAN (R 3.6.0)
##  pillar        1.4.3   2019-12-20 [1] CRAN (R 3.6.0)
##  pkgconfig     2.0.3   2019-09-22 [1] CRAN (R 3.6.0)
##  purrr         0.3.3   2019-10-18 [1] CRAN (R 3.6.0)
##  R6            2.4.1   2019-11-12 [1] CRAN (R 3.6.0)
##  Rcpp          1.0.3   2019-11-08 [1] CRAN (R 3.6.0)
##  readr         1.3.1   2018-12-21 [1] CRAN (R 3.6.0)
##  rlang         0.4.5   2020-03-01 [1] CRAN (R 3.6.0)
##  rmarkdown     2.1     2020-01-20 [1] CRAN (R 3.6.0)
##  sessioninfo   1.1.1   2018-11-05 [1] CRAN (R 3.6.0)
##  stringi       1.4.6   2020-02-17 [1] CRAN (R 3.6.1)
##  stringr       1.4.0   2019-02-10 [1] CRAN (R 3.6.0)
##  tibble        2.1.3   2019-06-06 [1] CRAN (R 3.6.0)
##  vctrs         0.2.4   2020-03-10 [1] CRAN (R 3.6.1)
##  withr         2.1.2   2018-03-15 [1] CRAN (R 3.6.0)
##  xfun          0.12    2020-01-13 [1] CRAN (R 3.6.0)
##  yaml          2.2.1   2020-02-01 [1] CRAN (R 3.6.0)
##
## [1] /Users/matt.dray/Library/R/3.6/library
## [2] /Library/Frameworks/R.framework/Versions/3.6/Resources/library

1. My darling white plastic MacBook would have melted horribly if set on fire.

2. To be fair, Mick has taught a lot of R classes in his time.

3. Especially because Geodude is made of rock and Koffing is basically just made of gas.