
We are excited to announce that submissions for lightning talks at rstudio::conf are now open! Lightning talks are short (5 minute) high energy presentations that give you the chance to talk about an interesting project that you’ve tackled with R. Short talks, or demos of your R code, R packages, and shiny apps are great options. See some of the great lightning talks from the Shiny Developer Conference (scroll down to user talks).

Submit your lightning talk proposal here!

Submissions are due December 1, 2016. We’ll announce the accepted talks on December 15. (You must be a registered attendee of rstudio::conf to present a lightning talk.)


haven 1.0.0 is a major release, and indicates that haven is now largely feature complete and has been tested on many real world datasets. There are four major changes in this version of haven:

  1. Improvements to the underlying ReadStat library
  2. Better handling of “special” missing values
  3. Improved date/time support
  4. Support for other file metadata

There were also a whole bunch of other minor improvements and bug fixes: you can see the complete list in the release notes.

ReadStat

Haven builds on top of the ReadStat C library by Evan Miller. This version of haven includes many improvements thanks to Evan’s hard work on ReadStat:

  • Can read binary/Ross compressed SAS files.
  • Support for reading and writing Stata 14 data files.
  • New write_sas() allows you to write data frames out to sas7bdat files. This is still somewhat experimental.
  • read_por() now actually works.
  • Many other bug fixes and minor improvements.

Missing values

haven 1.0.0 includes comprehensive support for the “special” types of missing values found in SAS, SPSS, and Stata. All three tools provide a global “system missing value”, displayed as “.”. This is roughly equivalent to R’s NA, although neither Stata nor SAS propagates missingness in numeric comparisons (SAS treats the missing value as the smallest possible number and Stata treats it as the largest possible number).

Each tool also provides a mechanism for recording multiple types of missingness:

  • Stata has “extended” missing values, .A through .Z.
  • SAS has “special” missing values, .A through .Z plus ._.
  • SPSS has per-column “user” missing values. Each column can declare up to three distinct values or a range of values (plus one distinct value) that should be treated as missing.

Stata and SAS only support tagged missing values for numeric columns. SPSS supports up to three distinct values for character columns. Generally, operations involving a user-missing type return a system missing value.

Haven models these missing values in two different ways:

  • For SAS and Stata, haven provides tagged_na(), which extends R’s regular NA with a single character label.
  • For SPSS, haven provides labelled_spss() that also models user defined values and ranges.

Use zap_missing() if you just want to convert to R’s regular NAs.
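
Here’s a minimal sketch of working with tagged missing values (the vector and tags are made up for illustration):

library(haven)
x <- c(1, 2, tagged_na("a"), tagged_na("z"), NA)
is_tagged_na(x)  # only the tagged values are flagged
#> [1] FALSE FALSE  TRUE  TRUE FALSE
na_tag(x)        # retrieve the tags; untagged values give NA
#> [1] NA  NA  "a" "z" NA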

You can get more details in the semantics vignette.

Date/times

Support for date/times has substantially improved:

  • read_dta() now recognises “%d” and custom date types.
  • read_sav() now correctly recognises EDATE and JDATE formats as dates. Variables with format DATE, ADATE, EDATE, JDATE or SDATE are imported as Date variables instead of POSIXct.
  • write_dta() and write_sav() support writing date/times.
  • Support for hms() has been moved into the hms package. Time variables now have class c("hms", "difftime") and a units attribute with value “secs”.
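
    For example, a quick sketch of the new class:

    library(hms)
    x <- hms(56, 34, 12)  # 12:34:56
    class(x)
    #> [1] "hms"      "difftime"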

Other metadata

Haven is slowly adding support for other types of metadata:

  • Variable formats can be read and written. Similarly to variable labels, formats are stored as an attribute on the vector. Use zap_formats() if you want to remove these attributes.
  • Added support for reading file “label” and “notes”. These are not currently printed, but are stored in the attributes if you need to access them.

I’m planning to release ggplot2 2.2.0 in early November. In preparation, I’d like to announce that a release candidate is now available. Please try it out, and file an issue on GitHub if you discover any problems. I hope we can find and fix any major issues before the official release.

Install the pre-release version with:

# install.packages("devtools")
devtools::install_github("hadley/ggplot2")

If you discover a major bug that breaks your plots, please file a minimal reprex, and then roll back to the released version with:

install.packages("ggplot2")

ggplot2 2.2.0 will be a relatively major release, including subtitles and captions, a rewritten facetting system, improved theming, and better stacked bar charts.

The majority of this work was carried out by Thomas Pedersen, whom I was lucky to have as my “ggplot2 intern” this summer. Make sure to check out his other visualisation packages: ggraph, ggforce, and tweenr.

Subtitles and captions

Thanks to Bob Rudis, you can now add subtitles and captions:

ggplot(mpg, aes(displ, hwy)) +
  geom_point(aes(color = class)) +
  geom_smooth(se = FALSE, method = "loess") +
  labs(
    title = "Fuel efficiency generally decreases with engine size",
    subtitle = "Two seaters (sports cars) are an exception because of their light weight",
    caption = "Data from"
  )

These are controlled by the theme settings plot.subtitle and plot.caption.

The plot title is now aligned to the left by default. To return to the previous centering, use theme(plot.title = element_text(hjust = 0.5)).

Facets

The facet and layout implementation has been moved to ggproto and received a large rewrite and refactoring. This will allow others to create their own facetting systems, as described in the Extending ggplot2 vignette. Along with the rewrite, a number of features and improvements have been added, most notably:

  • Functions in facetting formulas, thanks to Dan Ruderman.
    ggplot(diamonds, aes(carat, price)) + 
      geom_hex(bins = 20) + 
      facet_wrap(~cut_number(depth, 6))
  • Axes were previously dropped when the panels in facet_wrap() did not completely fill the rectangle. Now, an axis is drawn underneath the hanging panels:
    ggplot(mpg, aes(displ, hwy)) + 
      geom_point() + 
      facet_wrap(~ class)


  • It is now possible to set the position of the axes through the position argument in the scale constructor:
    ggplot(mpg, aes(displ, hwy)) + 
      geom_point() + 
      scale_x_continuous(position = "top") + 
      scale_y_continuous(position = "right")


  • You can display a secondary axis that is a one-to-one transformation of the primary axis with the sec.axis argument:
    ggplot(mpg, aes(displ, hwy)) + 
      geom_point() + 
      scale_y_continuous(
        "mpg (US)", 
        sec.axis = sec_axis(~ . * 1.20, name = "mpg (UK)")
      )


  • Strips can be placed on any side, and the placement with respect to axes can be controlled with the strip.placement theme option.
    ggplot(mpg, aes(displ, hwy)) + 
      geom_point() + 
      facet_wrap(~ drv, strip.position = "bottom") + 
      theme(
        strip.placement = "outside",
        strip.background = element_blank(),
        strip.text = element_text(face = "bold")
      )


Theming

  • Blank elements can now be overridden again so you get the expected behavior when setting e.g. axis.line.x.
  • element_line() gets an arrow argument that lets you put arrows on axes.
    arrow <- arrow(length = unit(0.4, "cm"), type = "closed")
    ggplot(mpg, aes(displ, hwy)) + 
      geom_point() + 
      theme_minimal() + 
      theme(
        axis.line = element_line(arrow = arrow)
      )


  • Control of legend styling has been improved. The whole legend area can be aligned relative to the plot area, and a box can be drawn around all legends:
    ggplot(mpg, aes(displ, hwy, shape = drv, colour = fl)) + 
      geom_point() + 
      theme(
        legend.justification = "top",
        legend.box.margin = margin(3, 3, 3, 3, "mm"),
        legend.box.background = element_rect(colour = "grey50")
      )


  • panel.margin and legend.margin have been renamed to panel.spacing and legend.spacing respectively, as this better indicates their roles. A new legend.margin has been added that actually controls the margin around each legend.
  • When computing the height of titles, ggplot2 now includes the height of the descenders (i.e. the bits of g and y that hang underneath). This improves the margins around titles, particularly the y axis label. I have also very slightly increased the inner margins of axis titles, and removed the outer margins.
  • The default themes have been tweaked by Jean-Olivier Irisson, making them better match theme_grey().
  • Lastly, the theme() function now has named arguments so autocomplete and documentation suggestions are vastly improved.

Stacking bars

position_stack() and position_fill() now stack values in the reverse order of the grouping, which makes the default stack order match the legend.

avg_price <- diamonds %>% 
  group_by(cut, color) %>% 
  summarise(price = mean(price)) %>% 
  ungroup() %>% 
  mutate(price_rel = price - mean(price))

ggplot(avg_price) + 
  geom_col(aes(x = cut, y = price, fill = color))


(Note also the new geom_col() which is short-hand for geom_bar(stat = "identity"), contributed by Bob Rudis.)

Additionally, you can now stack negative values:

ggplot(avg_price) + 
  geom_col(aes(x = cut, y = price_rel, fill = color))


The overall ordering cannot necessarily be matched in the presence of negative values, but the ordering on either side of the x-axis will match.

If you want to stack in the opposite order, try forcats::fct_rev():

ggplot(avg_price) + 
  geom_col(aes(x = cut, y = price, fill = fct_rev(color)))

The tidyverse is a set of packages that work in harmony because they share common data representations and API design. The tidyverse package is designed to make it easy to install and load core packages from the tidyverse in a single command.

The best place to learn about all the packages in the tidyverse and how they fit together is R for Data Science. Expect to hear more about the tidyverse in the coming months as I work on improved package websites, making citation easier, and providing a common home for discussions about data analysis with the tidyverse.


You can install tidyverse with

install.packages("tidyverse")

This will install the core tidyverse packages that you are likely to use in almost every analysis:

  • ggplot2, for data visualisation.
  • dplyr, for data manipulation.
  • tidyr, for data tidying.
  • readr, for data import.
  • purrr, for functional programming.
  • tibble, for tibbles, a modern re-imagining of data frames.

It also installs a selection of other tidyverse packages that you’re likely to use frequently, but probably not in every analysis. This includes packages for data manipulation:

  • lubridate, for dates and date-times.
  • hms, for times.
  • stringr, for strings.
  • forcats, for factors.

Data import:

  • DBI, for databases.
  • haven, for SPSS, SAS and Stata files.
  • httr, for web APIs.
  • jsonlite, for JSON.
  • readxl, for .xls and .xlsx files.
  • rvest, for web scraping.
  • xml2, for XML.

And modelling:

  • modelr, for simple modelling within a pipeline.
  • broom, for turning models into tidy data.

These packages will be installed along with tidyverse, but you’ll load them explicitly with library().
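
For example (a trivial sketch):

library(tidyverse)  # attaches only the core packages
library(readxl)     # attach the others, like readxl, yourself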


library(tidyverse) will load the core tidyverse packages: ggplot2, tibble, tidyr, readr, purrr, and dplyr. You also get a condensed summary of conflicts with other packages you have loaded:

#> Loading tidyverse: ggplot2
#> Loading tidyverse: tibble
#> Loading tidyverse: tidyr
#> Loading tidyverse: readr
#> Loading tidyverse: purrr
#> Loading tidyverse: dplyr
#> Conflicts with tidy packages ---------------------------------------
#> filter(): dplyr, stats
#> lag():    dplyr, stats

You can see conflicts created later with tidyverse_conflicts():

library(MASS)
#> Attaching package: 'MASS'
#> The following object is masked from 'package:dplyr':
#>     select

tidyverse_conflicts()
#> Conflicts with tidy packages --------------------------------------
#> filter(): dplyr, stats
#> lag():    dplyr, stats
#> select(): dplyr, MASS

And you can check that all tidyverse packages are up-to-date with tidyverse_update():

#> The following packages are out of date:
#>  * broom (0.4.0 -> 0.4.1)
#>  * DBI   (0.4.1 -> 0.5)
#>  * Rcpp  (0.12.6 -> 0.12.7)
#> Update now?
#> 1: Yes
#> 2: No

This release of lubridate includes a range of bug fixes and minor improvements. Some highlights include:

  • period() and duration() constructors now accept character strings and allow a very flexible specification of timespans:
    period("3H 2M 1S")
    #> [1] "3H 2M 1S"
    duration("3 hours, 2 mins, 1 secs")
    #> [1] "10921s (~3.03 hours)"
    # Missing numerals default to 1. 
    # Repeated units are summed
    period("hour minute minute")
    #> [1] "1H 2M 0S"

    Period and duration parsing allows for arbitrary abbreviations of time units as long as the specification is unambiguous. For single letter specs, strptime() rules are followed, so m stands for months and M for minutes.

    These same rules allow you to compare strings and durations/periods:

    "2mins 1 sec" > period("2mins")
    #> [1] TRUE
  • Date-time rounding (with round_date(), floor_date(), and ceiling_date()) now supports unit multipliers, like “3 days” or “2 months”:
    ceiling_date(ymd_hms("2016-09-12 17:10:00"), unit = "5 minutes")
    #> [1] "2016-09-12 17:10:00 UTC"
  • The behavior of ceiling_date() for Date objects is now more intuitive. In short, dates are now interpreted as time intervals that are physically part of longer unit intervals:
    |day1| ... |day31|day1| ... |day28| ...
    |    January     |   February     | ...

    That means that rounding up 2000-01-01 by a month is done to the boundary between January and February, i.e. 2000-02-01:

    ceiling_date(ymd("2000-01-01"), unit = "month")
    #> [1] "2000-02-01"

    This behavior is controlled by the change_on_boundary argument.
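
    For example (a sketch; FALSE leaves instants that are already on a boundary unchanged):

    ceiling_date(ymd("2000-01-01"), unit = "month", change_on_boundary = FALSE)
    #> [1] "2000-01-01"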

  • It is now possible to compare POSIXct and Date objects:
    ymd_hms("2000-01-01 00:00:01") > ymd("2000-01-01")
    #> [1] TRUE
  • C-level parsing now handles English months and AM/PM indicator regardless of your locale. This means that English date-times are now always handled by lubridate C-level parsing and you don’t need to explicitly switch the locale.
  • New parsing function yq() allows you to parse a year + quarter:
    yq("2016.2")
    #> [1] "2016-04-01"

    The new q format is available in all lubridate parsing functions.

See the release notes for the full list of changes. A big thanks goes to everyone who contributed: @arneschillert, @cderv, @ijlyttle, @jasonelaw, @jonboiser, and @krlmlr.

If you use packages from the tidyverse (like tibble and readr) you don’t need to worry about getting factors when you don’t want them. But factors are a useful data structure in their own right, particularly for modelling and visualisation, because they allow you to control the order of the levels. Working with factors in base R can be a little frustrating because of a handful of missing tools. The goal of forcats is to fill in those missing pieces so you can access the power of factors with a minimum of pain.

Install forcats with:

install.packages("forcats")

forcats provides two main types of tools to change either the values or the order of the levels. I’ll call out some of the most important functions below, using the included gss_cat dataset, which contains a selection of categorical variables from the General Social Survey.


gss_cat
#> # A tibble: 21,483 × 9
#>    year       marital   age   race        rincome            partyid
#>   <int>        <fctr> <int> <fctr>         <fctr>             <fctr>
#> 1  2000 Never married    26  White  $8000 to 9999       Ind,near rep
#> 2  2000      Divorced    48  White  $8000 to 9999 Not str republican
#> 3  2000       Widowed    67  White Not applicable        Independent
#> 4  2000 Never married    39  White Not applicable       Ind,near rep
#> 5  2000      Divorced    25  White Not applicable   Not str democrat
#> 6  2000       Married    25  White $20000 - 24999    Strong democrat
#> # ... with 2.148e+04 more rows, and 3 more variables: relig <fctr>,
#> #   denom <fctr>, tvhours <int>

Change level values

You can recode specified factor levels with fct_recode():

gss_cat %>% count(partyid)
#> # A tibble: 10 × 2
#>              partyid     n
#>               <fctr> <int>
#> 1          No answer   154
#> 2         Don't know     1
#> 3        Other party   393
#> 4  Strong republican  2314
#> 5 Not str republican  3032
#> 6       Ind,near rep  1791
#> # ... with 4 more rows

gss_cat %>%
  mutate(partyid = fct_recode(partyid,
    "Republican, strong"    = "Strong republican",
    "Republican, weak"      = "Not str republican",
    "Independent, near rep" = "Ind,near rep",
    "Independent, near dem" = "Ind,near dem",
    "Democrat, weak"        = "Not str democrat",
    "Democrat, strong"      = "Strong democrat"
  )) %>%
  count(partyid)
#> # A tibble: 10 × 2
#>                 partyid     n
#>                  <fctr> <int>
#> 1             No answer   154
#> 2            Don't know     1
#> 3           Other party   393
#> 4    Republican, strong  2314
#> 5      Republican, weak  3032
#> 6 Independent, near rep  1791
#> # ... with 4 more rows

Note that unmentioned levels are left as is, and the order of the levels is preserved.

fct_lump() allows you to lump the rarest (or most common) levels into a new “other” level. The default behaviour is to collapse the smallest levels into other, ensuring that it’s still the smallest level. For the religion variable, that tells us that Protestants outnumber all other religions combined, which is interesting, but we probably want more levels.

gss_cat %>% 
  mutate(relig = fct_lump(relig)) %>% 
  count(relig)
#> # A tibble: 2 × 2
#>        relig     n
#>       <fctr> <int>
#> 1      Other 10637
#> 2 Protestant 10846

Alternatively you can supply a number of levels to keep, n, or a minimum proportion for inclusion, prop. If you use negative values, fct_lump() will change direction, combining the most common values while preserving the rarest.

gss_cat %>% 
  mutate(relig = fct_lump(relig, n = 5)) %>% 
  count(relig)
#> # A tibble: 6 × 2
#>        relig     n
#>       <fctr> <int>
#> 1      Other   913
#> 2  Christian   689
#> 3       None  3523
#> 4     Jewish   388
#> 5   Catholic  5124
#> 6 Protestant 10846

gss_cat %>% 
  mutate(relig = fct_lump(relig, prop = -0.10)) %>% 
  count(relig)
#> # A tibble: 12 × 2
#>                     relig     n
#>                    <fctr> <int>
#> 1               No answer    93
#> 2              Don't know    15
#> 3 Inter-nondenominational   109
#> 4         Native american    23
#> 5               Christian   689
#> 6      Orthodox-christian    95
#> # ... with 6 more rows

Change level order

There are four simple helpers for common operations:

  • fct_relevel() is similar to stats::relevel() but allows you to move any number of levels to the front.
  • fct_inorder() orders according to the first appearance of each level.
  • fct_infreq() orders from most common to rarest.
  • fct_rev() reverses the order of levels.
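
Here’s a tiny sketch of the four helpers on a made-up factor (level orders shown as comments):

f <- factor(c("a", "b", "b", "c", "c", "c"))
levels(fct_relevel(f, "c"))  # "c" "a" "b": move "c" to the front
levels(fct_inorder(f))       # "a" "b" "c": order of first appearance
levels(fct_infreq(f))        # "c" "b" "a": most common first
levels(fct_rev(f))           # "c" "b" "a": reversed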

fct_reorder() and fct_reorder2() are useful for visualisations. fct_reorder() reorders the factor levels by another variable. This is useful when you map a categorical variable to position, as shown in the following example which shows the average number of hours spent watching television across religions.

relig <- gss_cat %>%
  group_by(relig) %>%
  summarise(
    age = mean(age, na.rm = TRUE),
    tvhours = mean(tvhours, na.rm = TRUE),
    n = n()
  )
ggplot(relig, aes(tvhours, relig)) + geom_point()

ggplot(relig, aes(tvhours, fct_reorder(relig, tvhours))) +
  geom_point()

fct_reorder2() extends the same idea to plots where a factor is mapped to another aesthetic, like colour. The defaults are designed to make legends easier to read for line plots, as shown in the following example looking at marital status by age.

by_age <- gss_cat %>%
  filter(! %>%
  group_by(age, marital) %>%
  count() %>%
  mutate(prop = n / sum(n))

ggplot(by_age, aes(age, prop)) +
  geom_line(aes(colour = marital))

ggplot(by_age, aes(age, prop)) +
  geom_line(aes(colour = fct_reorder2(marital, age, prop))) +
  labs(colour = "marital")

Learning more

You can learn more about forcats in R for data science, and on the forcats website.

Please let me know if you have more factor problems that forcats doesn’t help with!

We’re proud to announce version 1.2.0 of the tibble package. Tibbles are a modern reimagining of the data frame, keeping what time has shown to be effective, and throwing out what is not. Grab the latest version with:

install.packages("tibble")

This is mostly a maintenance release, with the following major changes:

  • More options for adding individual rows and (new!) columns
  • Improved function names
  • Minor tweaks to the output

There are many other small improvements and bug fixes: please see the release notes for a complete list.

Thanks to Jenny Bryan for add_row() and add_column() improvements and ideas, to William Dunlap for pointing out a bug with tibble’s implementation of all.equal(), to Kevin Wright for pointing out a rare bug with glimpse(), and to all the other contributors. Use the issue tracker to submit bugs or suggest ideas; your contributions are always welcome.

Adding rows and columns

There are now more options for adding individual rows, and columns can be added in a similar way, illustrated with this small tibble:

df <- tibble(x = 1:3, y = 3:1)
df
#> # A tibble: 3 × 2
#>       x     y
#>   <int> <int>
#> 1     1     3
#> 2     2     2
#> 3     3     1

The add_row() function allows control over where the new rows are added. In the following example, the row (4, 0) is added before the second row:

df %>% 
  add_row(x = 4, y = 0, .before = 2)
#> # A tibble: 4 × 2
#>       x     y
#>   <dbl> <dbl>
#> 1     1     3
#> 2     4     0
#> 3     2     2
#> 4     3     1

Adding more than one row is now fully supported, although not recommended in general because it can be a bit hard to read.

df %>% 
  add_row(x = 4:5, y = 0:-1)
#> # A tibble: 5 × 2
#>       x     y
#>   <int> <int>
#> 1     1     3
#> 2     2     2
#> 3     3     1
#> 4     4     0
#> 5     5    -1

Columns can now be added in much the same way with the new add_column() function:

df %>% 
  add_column(z = -1:1, w = 0)
#> # A tibble: 3 × 4
#>       x     y     z     w
#>   <int> <int> <int> <dbl>
#> 1     1     3    -1     0
#> 2     2     2     0     0
#> 3     3     1     1     0

It also supports .before and .after arguments:

df %>% 
  add_column(z = -1:1, .after = 1)
#> # A tibble: 3 × 3
#>       x     z     y
#>   <int> <int> <int>
#> 1     1    -1     3
#> 2     2     0     2
#> 3     3     1     1

df %>%  
  add_column(w = 0:2, .before = "x")
#> # A tibble: 3 × 3
#>       w     x     y
#>   <int> <int> <int>
#> 1     0     1     3
#> 2     1     2     2
#> 3     2     3     1

The add_column() function will never alter your existing data: you can’t overwrite existing columns, and you can’t add new observations.
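
For example, both of these fail (a sketch; the exact error messages may differ):

df %>% add_column(x = 4:6)  # error: column x already exists
df %>% add_column(z = 1:5)  # error: 5 values can't fit a 3-row tibble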

Function names

frame_data() is now tribble(), which stands for “transposed tibble”. The old name still works, but will be deprecated eventually.

tribble(
  ~x, ~y,
   1, "a",
   2, "z"
)
#> # A tibble: 2 × 2
#>       x     y
#>   <dbl> <chr>
#> 1     1     a
#> 2     2     z

Output tweaks

We’ve tweaked the output again to use the multiplication character × instead of x when printing dimensions (this still renders nicely on Windows). We surround non-syntactic column names with backticks, and dttm is now used instead of time to distinguish POSIXt and hms (or difftime) values.

The example below shows the new rendering:

tibble(`date and time` = Sys.time(), time = hms::hms(minutes = 3))
#> # A tibble: 1 × 2
#>       `date and time`     time
#>                <dttm>   <time>
#> 1 2016-08-29 16:48:57 00:03:00

Expect the printed output to continue to evolve in the next release. Stay tuned for a new function that reconstructs tribble() calls from existing data frames.

I’m pleased to announce version 1.1.0 of stringr. stringr makes string manipulation easier by using consistent function and argument names, and eliminating options that you don’t need 95% of the time. To get started with stringr, check out the strings chapter in R for data science. Install it with:

install.packages("stringr")

This release is mostly bug fixes, but there are a couple of new features you might care about.

  • There are three new datasets, fruit, words, and sentences, to help you practice your regular expression skills:
    str_subset(fruit, "(..)\\1")
    #> [1] "banana"      "coconut"     "cucumber"    "jujube"      "papaya"     
    #> [6] "salal berry"
    head(words)
    #> [1] "a"        "able"     "about"    "absolute" "accept"   "account"
    head(sentences, 1)
    #> [1] "The birch canoe slid on the smooth planks."
  • More functions work with boundary(): str_detect() and str_subset() can detect boundaries, and str_extract() and str_extract_all() pull out the components between boundaries. This is particularly useful if you want to extract logical constructs like words or sentences.
    x <- "This is harder than you might expect, e.g. punctuation!"
    x %>% str_extract_all(boundary("word")) %>% .[[1]]
    #> [1] "This"        "is"          "harder"      "than"        "you"        
    #> [6] "might"       "expect"      "e.g"         "punctuation"
    x %>% str_extract(boundary("sentence"))
    #> [1] "This is harder than you might expect, e.g. punctuation!"
  • str_view() and str_view_all() create HTML widgets that display regular expression matches. This is particularly useful for teaching.
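    For example, running this interactively highlights the first vowel match in each string (a minimal sketch):
    str_view(c("abc", "def", "fgh"), "[aeiou]")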

For a complete list of changes, please see the release notes.

I’m pleased to announce tidyr 0.6.0. tidyr makes it easy to “tidy” your data, storing it in a consistent form so that it’s easy to manipulate, visualise and model. Tidy data has a simple convention: put variables in the columns and observations in the rows. You can learn more about it in the tidy data vignette. Install it with:

install.packages("tidyr")

I mostly released this version to bundle up a number of small tweaks needed for R for Data Science. But there’s one nice new feature, contributed by Jan Schulz: drop_na(), which drops rows containing missing values:

df <- tibble(x = c(1, 2, NA), y = c("a", NA, "b"))
df
#> # A tibble: 3 × 2
#>       x     y
#>   <dbl> <chr>
#> 1     1     a
#> 2     2  <NA>
#> 3    NA     b

# Called without arguments, it drops rows containing
# missing values in any variable:
df %>% drop_na()
#> # A tibble: 1 × 2
#>       x     y
#>   <dbl> <chr>
#> 1     1     a

# Or you can restrict the variables it looks at, 
# using select() style syntax:
df %>% drop_na(x)
#> # A tibble: 2 × 2
#>       x     y
#>   <dbl> <chr>
#> 1     1     a
#> 2     2  <NA>

Please see the release notes for a complete list of changes.

readr 1.0.0 is now available on CRAN. readr makes it easy to read many types of rectangular data, including csv, tsv and fixed width files. Compared to base equivalents like read.csv(), readr is much faster and gives more convenient output: it never converts strings to factors, can parse date/times, and it doesn’t munge the column names. Install the latest version with:

install.packages("readr")

Releasing version 1.0.0 was a deliberate choice to reflect the maturity and stability of readr, thanks largely to work by Jim Hester. readr is by no means perfect, but I don’t expect any major changes to the API in the future.

In this version we:

  • Used a better strategy for guessing column types.
  • Improved the default date and time parsers.
  • Provided a full set of lower-level file and line readers and writers.
  • Fixed many bugs.

Column guessing

The process by which readr guesses the types of columns has received a substantial overhaul to make it easier to fix problems when the initial guesses aren’t correct, and to make it easier to generate reproducible code. Now column specifications are printed by default when you read from a file:

mtcars2 <- read_csv(readr_example("mtcars.csv"))
#> Parsed with column specification:
#> cols(
#>   mpg = col_double(),
#>   cyl = col_integer(),
#>   disp = col_double(),
#>   hp = col_integer(),
#>   drat = col_double(),
#>   wt = col_double(),
#>   qsec = col_double(),
#>   vs = col_integer(),
#>   am = col_integer(),
#>   gear = col_integer(),
#>   carb = col_integer()
#> )

The thought is that once you’ve figured out the correct column types for a file, you should make the parsing strict. You can do this either by copying and pasting the printed column specification or by saving the spec to disk:

# Once you've figured out the correct types
mtcars_spec <- write_rds(spec(mtcars2), "mtcars2-spec.rds")

# Every subsequent load
mtcars2 <- read_csv(
  readr_example("mtcars.csv"),
  col_types = read_rds("mtcars2-spec.rds")
)

# In production, you might want to throw an error if there
# are any parsing problems.

You can now also adjust the number of rows that readr uses to guess the column types with guess_max:

challenge <- read_csv(readr_example("challenge.csv"))
#> Parsed with column specification:
#> cols(
#>   x = col_integer(),
#>   y = col_character()
#> )
#> Warning: 1000 parsing failures.
#>  row col               expected             actual
#> 1001   x no trailing characters .23837975086644292
#> 1002   x no trailing characters .41167997173033655
#> 1003   x no trailing characters .7460716762579978 
#> 1004   x no trailing characters .723450553836301  
#> 1005   x no trailing characters .614524137461558  
#> .... ... ...................... ..................
#> See problems(...) for more details.
challenge <- read_csv(readr_example("challenge.csv"), guess_max = 1500)
#> Parsed with column specification:
#> cols(
#>   x = col_double(),
#>   y = col_date(format = "")
#> )

(If you want to suppress the printed specification, just provide the dummy spec col_types = cols().)

You can now access the guessing algorithm from R: guess_parser() will tell you which parser readr will select.

#> [1] "number"

# Were previously guessed as numbers
guess_parser(c(".", "-"))
#> [1] "character"
guess_parser(c("10W", "20N"))
#> [1] "character"

# Now uses the default time format
#> [1] "time"

Date-time parsing improvements

The date-time parsers recognise three new format strings:

  • %I for 12 hour time format:
    parse_time("1 pm", "%I %p")
    #> 13:00:00

    Note that parse_time() returns hms from the hms package, rather than a custom time class.

  • %AD and %AT are “automatic” date and time parsers. They are both slightly less flexible than previous defaults. The automatic date parser requires a four digit year, and only accepts - and / as separators. The flexible time parser now requires colons between hours and minutes and optional seconds.
    parse_date("2010-01-01", "%AD")
    #> [1] "2010-01-01"
    parse_time("15:01", "%AT")
    #> 15:01:00

If the format argument is omitted in parse_date() or parse_time(), the default date and time formats specified in the locale will be used. These now default to %AD and %AT respectively. You may want to override these in your standard locale() if the conventions are different where you live.
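
For example, a sketch for day/month/year data:

parse_date("31/12/2016", locale = locale(date_format = "%d/%m/%Y"))
#> [1] "2016-12-31"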

Low-level readers and writers

readr now contains a full set of efficient lower-level readers:

  • read_file() reads a file into a length-1 character vector; read_file_raw() reads a file into a single raw vector.
  • read_lines() reads a file into a character vector with one entry per line; read_lines_raw() reads into a list of raw vectors with one entry per line.

These are paired with write_lines() and write_file() to efficiently write character and raw vectors back to disk.
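
A quick round-trip sketch with a temporary file:

tmp <- tempfile()
write_lines(c("line one", "line two"), tmp)
read_lines(tmp)
#> [1] "line one" "line two"
read_file(tmp)
#> [1] "line one\nline two\n"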

Other changes

  • read_fwf() was overhauled to reliably read only a partial set of columns, to read files with ragged final columns (by setting the final position/width to NA), and to skip comments (with the comment argument).
  • readr contains an experimental API for reading a file in chunks, e.g. read_csv_chunked() and read_lines_chunked(). These allow you to work with files that are bigger than memory (see the sketch after this list). We haven’t yet finalised the API, so please use with care, and send us your feedback.
  • There are many other bug fixes and minor improvements. You can see a complete list in the release notes.
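
Since the chunked API is experimental, treat this as a sketch of the callback style rather than a stable recipe:

# Print the number of rows in each 10-row chunk of a file
read_csv_chunked(
  readr_example("mtcars.csv"),
  callback = SideEffectChunkCallback$new(function(x, pos) print(nrow(x))),
  chunk_size = 10
)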

A big thanks goes to all the community members who contributed to this release: @antoine-lizee, @fpinter, @ghaarsma, @jennybc, @jeroenooms, @leeper, @LluisRamon, @noamross, and @tvedebrink.