rstudio::conf 2017, the conference on all things R and RStudio, is only 90 days away. Now is the time to claim your spot or grab one of the few remaining seats at Training Days – including the new Tidyverse workshop.
Whether you’re already registered or still working on it, we’re delighted today to announce the full conference schedule, so that you can plan your days in Florida.
rstudio::conf 2017 takes place January 12-14 at the Gaylord Resorts in Kissimmee, Florida. There are over 30 talks and tutorials to choose from that are sure to accelerate your productivity in R and RStudio. In addition to the highlights below, topics include the latest news on R Notebooks, sparklyr, profiling, the tidyverse, Shiny, R Markdown, HTML widgets, data access and the new enterprise-scale publishing capabilities of RStudio Connect.
– Hadley Wickham, Chief Scientist, RStudio: Data Science in the Tidyverse
– Andrew Flowers, Economics Writer, FiveThirtyEight: Finding and Telling Stories with R
– J.J. Allaire, Software Engineer, CEO & Founder: RStudio Past, Present and Future
– Winston Chang, Software Engineer, RStudio: Building Dashboards with Shiny
– Charlotte Wickham, Oregon State University: Happy R Users Purrr
– Yihui Xie, Software Engineer, RStudio: Advanced R Markdown
– Jenny Bryan, University of British Columbia: Happy Git and GitHub for the UseR
– Max Kuhn, Senior Director Non-Clinical Statistics, Pfizer
– Dirk Eddelbuettel, Ketchum Trading: Extending R with C++: A Brief Introduction to Rcpp
– Hilary Parker, Stitch Fix: “Opinionated Analysis Development”
– Bryan Lewis, Paradigm4: “Fun with htmlwidgets”
– Ryan Hafen, Hafen Consulting: “Interactive plotting with rbokeh and crosstalk”
– Julia Silge, Datassist: “Text mining, the tidy way”
– Bob Rudis, Rapid7: “Writing readable code with pipes”
– Joseph Rickert, R Ambassador, RStudio: R’s Role in Data Science
Special Reminder: When you register, make sure you purchase your ticket for Friday evening at Universal’s Wizarding World of Harry Potter. The park is reserved exclusively for rstudio::conf attendees. It’s an extraordinary experience we’re sure you’ll enjoy!
We appreciate our sponsors and exhibitors!
I’m pleased to announce the release of haven. Haven is designed to facilitate the transfer of data between R and SAS, SPSS, and Stata. It makes it easy to read SAS, SPSS, and Stata file formats into R data frames, and makes it easy to save your R data frames into SAS, SPSS, and Stata formats if you need to collaborate with others using closed source statistical software. Install haven by running:
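install.packages("haven")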
haven 1.0.0 is a major release, and indicates that haven is now largely feature complete and has been tested on many real world datasets. There are four major changes in this version of haven:
- Improvements to the underlying ReadStat library
- Better handling of “special” missing values
- Improved date/time support
- Support for other file metadata.
There were also a whole bunch of other minor improvements and bug fixes: you can see the complete list in the release notes.
- Can read binary (Ross) compressed SAS files.
- Support for reading and writing Stata 14 data files.
- write_sas() allows you to write data frames out to sas7bdat files. This is still somewhat experimental.
- read_por() now actually works.
- Many other bug fixes and minor improvements.
haven 1.0.0 includes comprehensive support for the “special” types of missing values found in SAS, SPSS, and Stata. All three tools provide a global “system missing value”, displayed as “.”. This is roughly equivalent to R’s NA, although neither Stata nor SAS propagate missingness in numeric comparisons (SAS treats the missing value as the smallest possible number and Stata treats it as the largest possible number).
Each tool also provides a mechanism for recording multiple types of missingness:
- Stata has “extended” missing values, .a through .z.
- SAS has “special” missing values, .A through .Z, plus ._.
- SPSS has per-column “user” missing values. Each column can declare up to three distinct values or a range of values (plus one distinct value) that should be treated as missing.
Stata and SAS only support tagged missing values for numeric columns. SPSS supports up to three distinct values for character columns. Generally, operations involving a user-missing type return a system missing value.
Haven models these missing values in two different ways:
- For SAS and Stata, haven provides tagged_na(), which extends R’s regular NA to add a single character label.
- For SPSS, haven provides labelled_spss(), which also models user-defined values and ranges.
Use zap_missing() if you just want to convert these to R’s regular NA.
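For example (a minimal sketch using haven’s tagged_na(), na_tag(), and zap_missing() helpers; not code from the original post):

library(haven)
x <- c(1, 2, tagged_na("a"), tagged_na("z"), NA)
# Tagged missing values behave like regular NAs, but carry a one-letter tag
na_tag(x)
#> NA NA "a" "z" NA
# zap_missing() replaces all special missing values with R's regular NA
zap_missing(x)
#> 1 2 NA NA NA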
You can get more details in the semantics vignette.
Support for date/times has substantially improved:
- read_dta() now recognises “%d” and custom date types.
- read_sav() now correctly recognises EDATE and JDATE formats as dates. Variables with format DATE, ADATE, EDATE, JDATE or SDATE are imported as Date variables instead of POSIXct.
- write_sav() now supports writing date/times.
- Support for hms() has been moved into the hms package. Time variables now have class c("hms", "difftime") and a units attribute with value "secs".
Haven is slowly adding support for other types of metadata:
- Variable formats can be read and written. Similarly to variable labels, formats are stored as an attribute on the vector. Use zap_formats() if you want to remove these attributes.
- Added support for reading file “label” and “notes”. These are not currently printed, but are stored in the attributes if you need to access them.
I’m planning to release ggplot2 2.2.0 in early November. In preparation, I’d like to announce that a release candidate is now available: version 2.1.0.9001. Please try it out, and file an issue on GitHub if you discover any problems. I hope we can find and fix any major issues before the official release.
Install the pre-release version with:
# install.packages("devtools")
devtools::install_github("hadley/ggplot2")
If you discover a major bug that breaks your plots, please file a minimal reprex, and then roll back to the released version with:
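install.packages("ggplot2")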
ggplot2 2.2.0 will be a relatively major release including:
- Subtitles and captions.
- A large rewrite of the facetting system.
- Improved theme options.
- Better stacking
- Numerous bug fixes and minor improvements.
The majority of this work was carried out by Thomas Lin Pedersen, who I was lucky to have as my “ggplot2 intern” this summer. Make sure to check out his other visualisation packages: ggraph, ggforce, and tweenr.
The facet and layout implementation has been moved to ggproto and received a large rewrite and refactoring. This will allow others to create their own facetting systems, as described in the Extending ggplot2 vignette. Along with the rewrite, a number of features and improvements have been added, most notably:
- Functions in facetting formulas, thanks to Dan Ruderman.
- Axes were previously dropped when the panels in facet_wrap() did not completely fill the rectangle. Now, an axis is drawn underneath the hanging panels:
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  facet_wrap(~class)
- It is now possible to set the position of the axes through the position argument in the scale constructor:
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  scale_x_continuous(position = "top") +
  scale_y_continuous(position = "right")
- You can display a secondary axis that is a one-to-one transformation of the primary axis with the sec.axis argument:
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  scale_y_continuous(
    "mpg (US)",
    sec.axis = sec_axis(~ . * 1.20, name = "mpg (UK)")
  )
- Strips can be placed on any side, and the placement with respect to axes can be controlled with the strip.placement theme option:
ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  facet_wrap(~ drv, strip.position = "bottom") +
  theme(
    strip.placement = "outside",
    strip.background = element_blank(),
    strip.text = element_text(face = "bold")
  ) +
  xlab(NULL)
- Blank elements can now be overridden again, so you get the expected behavior when setting e.g. axis.line.x.
- element_line() gets an arrow argument that lets you put arrows on axes:
arrow <- arrow(length = unit(0.4, "cm"), type = "closed")

ggplot(mpg, aes(displ, hwy)) +
  geom_point() +
  theme_minimal() +
  theme(axis.line = element_line(arrow = arrow))
- Control of legend styling has been improved. The whole legend area can be aligned according to the plot area and a box can be drawn around all legends:
ggplot(mpg, aes(displ, hwy, shape = drv, colour = fl)) +
  geom_point() +
  theme(
    legend.justification = "top",
    legend.box.margin = margin(3, 3, 3, 3, "mm"),
    legend.box.background = element_rect(colour = "grey50")
  )
- The existing panel.margin and legend.margin theme elements have been renamed to panel.spacing and legend.spacing respectively, as this better indicates their roles. A new legend.margin has been added that actually controls the margin around each legend.
- When computing the height of titles, ggplot2 now includes the height of the descenders (i.e. the bits of y that hang underneath). This improves the margins around titles, particularly the y axis label. I have also very slightly increased the inner margins of axis titles, and removed the outer margins.
- The default themes have been tweaked by Jean-Olivier Irisson, making them better match theme_grey().
- Lastly, the theme() function now has named arguments, so autocomplete and documentation suggestions are vastly improved.
position_stack() and position_fill() now stack values in the reverse order of the grouping, which makes the default stack order match the legend.
avg_price <- diamonds %>%
  group_by(cut, color) %>%
  summarise(price = mean(price)) %>%
  ungroup() %>%
  mutate(price_rel = price - mean(price))

ggplot(avg_price) +
  geom_col(aes(x = cut, y = price, fill = color))
(Note also the new geom_col(), which is short-hand for geom_bar(stat = "identity"), contributed by Bob Rudis.)
Additionally, you can now stack negative values:
ggplot(avg_price) +
  geom_col(aes(x = cut, y = price_rel, fill = color))
The overall ordering cannot necessarily be matched in the presence of negative values, but the ordering on either side of the x-axis will match.
If you want to stack in the opposite order, try forcats::fct_rev():
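A minimal sketch (assumed, not the post’s exact code):

ggplot(avg_price) +
  geom_col(aes(x = cut, y = price, fill = forcats::fct_rev(color)))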
The tidyverse is a set of packages that work in harmony because they share common data representations and API design. The tidyverse package is designed to make it easy to install and load core packages from the tidyverse in a single command.
The best place to learn about all the packages in the tidyverse and how they fit together is R for Data Science. Expect to hear more about the tidyverse in the coming months as I work on improved package websites, making citation easier, and providing a common home for discussions about data analysis with the tidyverse.
You can install tidyverse with:
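install.packages("tidyverse")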
This will install the core tidyverse packages that you are likely to use in almost every analysis:
- ggplot2, for data visualisation.
- dplyr, for data manipulation.
- tidyr, for data tidying.
- readr, for data import.
- purrr, for functional programming.
- tibble, for tibbles, a modern re-imagining of data frames.
It also installs a selection of other tidyverse packages that you’re likely to use frequently, but probably not in every analysis. This includes packages for data manipulation:
- DBI, for databases.
- haven, for SPSS, SAS and Stata files.
- httr, for web APIs.
- jsonlite for JSON.
- readxl, for .xls and .xlsx files.
- rvest, for web scraping.
- xml2, for XML.
These packages will be installed along with tidyverse, but you’ll load them explicitly with library().
library(tidyverse) will load the core tidyverse packages: ggplot2, tibble, tidyr, readr, purrr, and dplyr. You also get a condensed summary of conflicts with other packages you have loaded:
library(tidyverse)
#> Loading tidyverse: ggplot2
#> Loading tidyverse: tibble
#> Loading tidyverse: tidyr
#> Loading tidyverse: readr
#> Loading tidyverse: purrr
#> Loading tidyverse: dplyr
#> Conflicts with tidy packages ---------------------------------------
#> filter(): dplyr, stats
#> lag():    dplyr, stats
You can see conflicts created later with tidyverse_conflicts():
library(MASS)
#>
#> Attaching package: 'MASS'
#> The following object is masked from 'package:dplyr':
#>
#>     select

tidyverse_conflicts()
#> Conflicts with tidy packages --------------------------------------
#> filter(): dplyr, stats
#> lag():    dplyr, stats
#> select(): dplyr, MASS
And you can check that all tidyverse packages are up-to-date with tidyverse_update():
tidyverse_update()
#> The following packages are out of date:
#>  * broom (0.4.0 -> 0.4.1)
#>  * DBI   (0.4.1 -> 0.5)
#>  * Rcpp  (0.12.6 -> 0.12.7)
#> Update now?
#>
#> 1: Yes
#> 2: No
I am pleased to announce lubridate 1.6.0. Lubridate is designed to make working with dates and times as pleasant as possible, and is maintained by Vitalie Spinu. You can install the latest version with:
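install.packages("lubridate")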
This release includes a range of bug fixes and minor improvements. Some highlights from this release include:
- The period() and duration() constructors now accept character strings and allow a very flexible specification of timespans:
period("3H 2M 1S") #>  "3H 2M 1S" duration("3 hours, 2 mins, 1 secs") #>  "10921s (~3.03 hours)" # Missing numerals default to 1. # Repeated units are summed period("hour minute minute") #>  "1H 2M 0S"
- Period and duration parsing allows for arbitrary abbreviations of time units as long as the specification is unambiguous. For single letter specs, strptime() rules are followed, so m stands for months and M for minutes.
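For example (a quick sketch of these single-letter rules; outputs omitted):

period("2m")  # 2 months (lowercase m)
period("2M")  # 2 minutes (uppercase M)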
These same rules allow you to compare strings and durations/periods:
"2mins 1 sec" > period("2mins") #>  TRUE
- Date time rounding (with round_date(), floor_date(), and ceiling_date()) now supports unit multipliers, like “3 days” or “2 months”:
ceiling_date(ymd_hms("2016-09-12 17:10:00"), unit = "5 minutes")
#> "2016-09-12 17:10:00 UTC"
- The behavior of ceiling_date() for Date objects is now more intuitive. In short, dates are now interpreted as time intervals that are physically part of longer unit intervals:
|day1| ... |day31|day1| ... |day28| ...
|     January    |    February    | ...
That means that rounding up 2000-01-01 by a month is done to the boundary between January and February, i.e. 2000-02-01:
ceiling_date(ymd("2000-01-01"), unit = "month")
#> "2000-02-01"
This behavior is controlled by the change_on_boundary argument of ceiling_date().
- It is now possible to compare POSIXct and Date objects:
ymd_hms("2000-01-01 00:00:01") > ymd("2000-01-01") #>  TRUE
- C-level parsing now handles English months and AM/PM indicator regardless of your locale. This means that English date-times are now always handled by lubridate C-level parsing and you don’t need to explicitly switch the locale.
- The new parsing function yq() allows you to parse a year + quarter:
yq("2016-02") #>  "2016-04-01"
The q format is available in all lubridate parsing functions.
I’m excited to announce forcats, a new package for categorical variables, or factors. Factors have a bad rap in R because they often turn up when you don’t want them. That’s because historically, factors were more convenient than character vectors, as discussed in stringsAsFactors: An unauthorized biography by Roger Peng, and stringsAsFactors = <sigh> by Thomas Lumley.
If you use packages from the tidyverse (like tibble and readr) you don’t need to worry about getting factors when you don’t want them. But factors are a useful data structure in their own right, particularly for modelling and visualisation, because they allow you to control the order of the levels. Working with factors in base R can be a little frustrating because of a handful of missing tools. The goal of forcats is to fill in those missing pieces so you can access the power of factors with a minimum of pain.
Install forcats with:
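install.packages("forcats")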
forcats provides two main types of tools to change either the values or the order of the levels. I’ll call out some of the most important functions below, using the included gss_cat dataset, which contains a selection of categorical variables from the General Social Survey.
library(dplyr)
library(ggplot2)
library(forcats)

gss_cat
#> # A tibble: 21,483 × 9
#>    year       marital   age   race        rincome            partyid
#>   <int>        <fctr> <int> <fctr>         <fctr>             <fctr>
#> 1  2000 Never married    26  White  $8000 to 9999       Ind,near rep
#> 2  2000      Divorced    48  White  $8000 to 9999 Not str republican
#> 3  2000       Widowed    67  White Not applicable        Independent
#> 4  2000 Never married    39  White Not applicable       Ind,near rep
#> 5  2000      Divorced    25  White Not applicable   Not str democrat
#> 6  2000       Married    25  White $20000 - 24999    Strong democrat
#> # ... with 2.148e+04 more rows, and 3 more variables: relig <fctr>,
#> #   denom <fctr>, tvhours <int>
Change level values
You can recode specified factor levels with fct_recode():
gss_cat %>% count(partyid)
#> # A tibble: 10 × 2
#>               partyid     n
#>                <fctr> <int>
#> 1           No answer   154
#> 2          Don't know     1
#> 3         Other party   393
#> 4   Strong republican  2314
#> 5  Not str republican  3032
#> 6        Ind,near rep  1791
#> # ... with 4 more rows

gss_cat %>%
  mutate(partyid = fct_recode(partyid,
    "Republican, strong"    = "Strong republican",
    "Republican, weak"      = "Not str republican",
    "Independent, near rep" = "Ind,near rep",
    "Independent, near dem" = "Ind,near dem",
    "Democrat, weak"        = "Not str democrat",
    "Democrat, strong"      = "Strong democrat"
  )) %>%
  count(partyid)
#> # A tibble: 10 × 2
#>                 partyid     n
#>                  <fctr> <int>
#> 1             No answer   154
#> 2            Don't know     1
#> 3           Other party   393
#> 4    Republican, strong  2314
#> 5      Republican, weak  3032
#> 6 Independent, near rep  1791
#> # ... with 4 more rows
Note that unmentioned levels are left as is, and the order of the levels is preserved.
fct_lump() allows you to lump the rarest (or most common) levels into a new “other” level. The default behaviour is to collapse the smallest levels into other, ensuring that it’s still the smallest level. For the religion variable, that tells us that Protestants outnumber all other religions, which is interesting, but we probably want more detail.
gss_cat %>%
  mutate(relig = fct_lump(relig)) %>%
  count(relig)
#> # A tibble: 2 × 2
#>        relig     n
#>       <fctr> <int>
#> 1      Other 10637
#> 2 Protestant 10846
Alternatively you can supply a number of levels to keep, n, or a minimum proportion for inclusion, prop. If you use negative values, fct_lump() will change direction, combining the most common values while preserving the rarest.
gss_cat %>%
  mutate(relig = fct_lump(relig, n = 5)) %>%
  count(relig)
#> # A tibble: 6 × 2
#>        relig     n
#>       <fctr> <int>
#> 1      Other   913
#> 2  Christian   689
#> 3       None  3523
#> 4     Jewish   388
#> 5   Catholic  5124
#> 6 Protestant 10846

gss_cat %>%
  mutate(relig = fct_lump(relig, prop = -0.10)) %>%
  count(relig)
#> # A tibble: 12 × 2
#>                     relig     n
#>                    <fctr> <int>
#> 1               No answer    93
#> 2              Don't know    15
#> 3 Inter-nondenominational   109
#> 4         Native american    23
#> 5               Christian   689
#> 6      Orthodox-christian    95
#> # ... with 6 more rows
Change level order
There are four simple helpers for common operations:
- fct_relevel() is similar to stats::relevel(), but allows you to move any number of levels to the front.
- fct_inorder() orders according to the first appearance of each level.
- fct_infreq() orders from most common to rarest.
- fct_rev() reverses the order of levels.
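A quick sketch of the four helpers on a toy factor (illustrative, not from the original post):

f <- factor(c("b", "b", "a", "c", "c", "c"))
levels(fct_relevel(f, "c"))  # "c" "a" "b": "c" moved to the front
levels(fct_inorder(f))       # "b" "a" "c": order of first appearance
levels(fct_infreq(f))        # "c" "b" "a": most common first
levels(fct_rev(f))           # "c" "b" "a": alphabetical order reversed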
fct_reorder() and fct_reorder2() are useful for visualisations.
fct_reorder() reorders the factor levels by another variable. This is useful when you map a categorical variable to position, as shown in the following example which shows the average number of hours spent watching television across religions.
relig <- gss_cat %>%
  group_by(relig) %>%
  summarise(
    age = mean(age, na.rm = TRUE),
    tvhours = mean(tvhours, na.rm = TRUE),
    n = n()
  )

ggplot(relig, aes(tvhours, relig)) +
  geom_point()

ggplot(relig, aes(tvhours, fct_reorder(relig, tvhours))) +
  geom_point()
fct_reorder2() extends the same idea to plots where a factor is mapped to another aesthetic, like colour. The defaults are designed to make legends easier to read for line plots, as shown in the following example looking at marital status by age.
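The post’s original example code isn’t reproduced here; the following is a minimal sketch along the same lines (assumed, not the original code):

by_age <- gss_cat %>%
  filter(!is.na(age)) %>%
  count(age, marital) %>%
  group_by(age) %>%
  mutate(prop = n / sum(n))

# fct_reorder2() orders the levels by prop at the largest age, so the
# legend matches the right-hand ends of the lines
ggplot(by_age, aes(age, prop, colour = fct_reorder2(marital, age, prop))) +
  geom_line() +
  labs(colour = "marital")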
We’re proud to announce version 1.2.0 of the tibble package. Tibbles are a modern reimagining of the data frame, keeping what time has shown to be effective, and throwing out what is not. Grab the latest version with:
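install.packages("tibble")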
This is mostly a maintenance release, with the following major changes:
- More options for adding individual rows and (new!) columns
- Improved function names
- Minor tweaks to the output
There are many other small improvements and bug fixes: please see the release notes for a complete list.
Thanks to Jenny Bryan for add_column() improvements and ideas, to William Dunlap for pointing out a bug with tibble’s implementation of all.equal(), to Kevin Wright for pointing out a rare bug with glimpse(), and to all the other contributors. Use the issue tracker to submit bugs or suggest ideas; your contributions are always welcome.
Adding rows and columns
There are now more options for adding individual rows, and columns can be added in a similar way, illustrated with this small tibble:
df <- tibble(x = 1:3, y = 3:1)

df
#> # A tibble: 3 × 2
#>       x     y
#>   <int> <int>
#> 1     1     3
#> 2     2     2
#> 3     3     1
The add_row() function allows control over where the new rows are added. In the following example, the row (4, 0) is added before the second row:
df %>% add_row(x = 4, y = 0, .before = 2)
#> # A tibble: 4 × 2
#>       x     y
#>   <dbl> <dbl>
#> 1     1     3
#> 2     4     0
#> 3     2     2
#> 4     3     1
Adding more than one row is now fully supported, although not recommended in general because it can be a bit hard to read.
df %>% add_row(x = 4:5, y = 0:-1)
#> # A tibble: 5 × 2
#>       x     y
#>   <int> <int>
#> 1     1     3
#> 2     2     2
#> 3     3     1
#> 4     4     0
#> 5     5    -1
Columns can now be added in much the same way with the new add_column() function:
df %>% add_column(z = -1:1, w = 0)
#> # A tibble: 3 × 4
#>       x     y     z     w
#>   <int> <int> <int> <dbl>
#> 1     1     3    -1     0
#> 2     2     2     0     0
#> 3     3     1     1     0
It also supports the .before and .after arguments:
df %>% add_column(z = -1:1, .after = 1)
#> # A tibble: 3 × 3
#>       x     z     y
#>   <int> <int> <int>
#> 1     1    -1     3
#> 2     2     0     2
#> 3     3     1     1

df %>% add_column(w = 0:2, .before = "x")
#> # A tibble: 3 × 3
#>       w     x     y
#>   <int> <int> <int>
#> 1     0     1     3
#> 2     1     2     2
#> 3     2     3     1
The add_column() function will never alter your existing data: you can’t overwrite existing columns, and you can’t add new observations.
frame_data() is now tribble(), which stands for “transposed tibble”. The old name still works, but will be deprecated eventually.
tribble(
  ~x, ~y,
  1, "a",
  2, "z"
)
#> # A tibble: 2 × 2
#>       x     y
#>   <dbl> <chr>
#> 1     1     a
#> 2     2     z
We’ve tweaked the output again to use the multiplication character × instead of x when printing dimensions (this still renders nicely on Windows). We surround non-semantic column names with backticks, and dttm is now used instead of time to distinguish date-times from times.
The example below shows the new rendering:
tibble(`date and time` = Sys.time(), time = hms::hms(minutes = 3))
#> # A tibble: 1 × 2
#>       `date and time`     time
#>                <dttm>   <time>
#> 1 2016-08-29 16:48:57 00:03:00
Expect the printed output to continue to evolve in the next release. Stay tuned for a new function that reconstructs tribble() calls from existing data frames.
I’m pleased to announce version 1.1.0 of stringr. stringr makes string manipulation easier by using consistent function and argument names, and eliminating options that you don’t need 95% of the time. To get started with stringr, check out the strings chapter in R for data science. Install it with:
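install.packages("stringr")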
This release is mostly bug fixes, but there are a couple of new features you might care about.
- There are three new datasets, fruit, words, and sentences, to help you practice your regular expression skills:
str_subset(fruit, "(..)\\1")
#> "banana"   "coconut"  "cucumber" "jujube"   "papaya"
#> "salal berry"

head(words)
#> "a" "able" "about" "absolute" "accept" "account"

sentences
#> "The birch canoe slid on the smooth planks."
- More functions work with boundary(): str_detect() and str_subset() can detect boundaries, and str_extract() and str_extract_all() pull out the components between boundaries. This is particularly useful if you want to extract logical constructs like words or sentences.
x <- "This is harder than you might expect, e.g. punctuation!" x %>% str_extract_all(boundary("word")) %>% .[] #>  "This" "is" "harder" "than" "you" #>  "might" "expect" "e.g" "punctuation" x %>% str_extract(boundary("sentence")) #>  "This is harder than you might expect, e.g. punctuation!"
- str_view() and str_view_all() create HTML widgets that display regular expression matches. This is particularly useful for teaching.
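For example (a minimal illustrative call, not from the original post):

str_view(c("abc", "def", "fgh"), "[aeiou]")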
For a complete list of changes, please see the release notes.
I’m pleased to announce tidyr 0.6.0. tidyr makes it easy to “tidy” your data, storing it in a consistent form so that it’s easy to manipulate, visualise and model. Tidy data has a simple convention: put variables in the columns and observations in the rows. You can learn more about it in the tidy data vignette. Install it with:
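install.packages("tidyr")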
I mostly released this version to bundle up a number of small tweaks needed for R for Data Science. But there’s one nice new feature, contributed by Jan Schulz: drop_na() drops rows containing missing values:
df <- tibble(x = c(1, 2, NA), y = c("a", NA, "b"))

df
#> # A tibble: 3 × 2
#>       x     y
#>   <dbl> <chr>
#> 1     1     a
#> 2     2  <NA>
#> 3    NA     b

# Called without arguments, it drops rows containing
# missing values in any variable:
df %>% drop_na()
#> # A tibble: 1 × 2
#>       x     y
#>   <dbl> <chr>
#> 1     1     a

# Or you can restrict the variables it looks at,
# using select() style syntax:
df %>% drop_na(x)
#> # A tibble: 2 × 2
#>       x     y
#>   <dbl> <chr>
#> 1     1     a
#> 2     2  <NA>
Please see the release notes for a complete list of changes.
readr 1.0.0 is now available on CRAN. readr makes it easy to read many types of rectangular data, including csv, tsv and fixed width files. Compared to base equivalents like read.csv(), readr is much faster and gives more convenient output: it never converts strings to factors, can parse date/times, and it doesn’t munge the column names. Install the latest version with:
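install.packages("readr")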
Releasing version 1.0.0 was a deliberate choice to reflect the maturity and stability of readr, thanks largely to work by Jim Hester. readr is by no means perfect, but I don’t expect any major changes to the API in the future.
In this version we:
- Use a better strategy for guessing column types.
- Improved the default date and time parsers.
- Provided a full set of lower-level file and line readers and writers.
- Fixed many bugs.
The process by which readr guesses the types of columns has received a substantial overhaul to make it easier to fix problems when the initial guesses aren’t correct, and to make it easier to generate reproducible code. Now column specifications are printed by default when you read from a file:
mtcars2 <- read_csv(readr_example("mtcars.csv"))
#> Parsed with column specification:
#> cols(
#>   mpg = col_double(),
#>   cyl = col_integer(),
#>   disp = col_double(),
#>   hp = col_integer(),
#>   drat = col_double(),
#>   wt = col_double(),
#>   qsec = col_double(),
#>   vs = col_integer(),
#>   am = col_integer(),
#>   gear = col_integer(),
#>   carb = col_integer()
#> )
The thought is that once you’ve figured out the correct column types for a file, you should make the parsing strict. You can do this either by copying and pasting the printed column specification or by saving the spec to disk:
# Once you've figured out the correct types
mtcars_spec <- write_rds(spec(mtcars2), "mtcars2-spec.rds")

# Every subsequent load
mtcars2 <- read_csv(
  readr_example("mtcars.csv"),
  col_types = read_rds("mtcars2-spec.rds")
)

# In production, you might want to throw an error if there
# are any parsing problems.
stop_for_problems(mtcars2)
You can now also adjust the number of rows that readr uses to guess the column types with guess_max:
challenge <- read_csv(readr_example("challenge.csv"))
#> Parsed with column specification:
#> cols(
#>   x = col_integer(),
#>   y = col_character()
#> )
#> Warning: 1000 parsing failures.
#>  row col               expected             actual
#> 1001   x no trailing characters .23837975086644292
#> 1002   x no trailing characters .41167997173033655
#> 1003   x no trailing characters  .7460716762579978
#> 1004   x no trailing characters   .723450553836301
#> 1005   x no trailing characters   .614524137461558
#> .... ... ...................... ..................
#> See problems(...) for more details.

challenge <- read_csv(readr_example("challenge.csv"), guess_max = 1500)
#> Parsed with column specification:
#> cols(
#>   x = col_double(),
#>   y = col_date(format = "")
#> )
(If you want to suppress the printed specification, just provide the dummy spec col_types = cols().)
You can now access the guessing algorithm from R:
guess_parser() will tell you which parser readr will select.
guess_parser("1,234") #>  "number" # Were previously guessed as numbers guess_parser(c(".", "-")) #>  "character" guess_parser(c("10W", "20N")) #>  "character" # Now uses the default time format guess_parser("10:30") #>  "time"
Date-time parsing improvements:
The date time parsers recognise three new format strings:
- %I for 12 hour time format:
library(hms)
parse_time("1 pm", "%I %p")
#> 13:00:00
Note that times are now returned as hms from the hms package, rather than a custom class.
- %AD and %AT are “automatic” date and time parsers. They are both slightly less flexible than the previous defaults. The automatic date parser requires a four digit year, and only accepts - and / as separators. The flexible time parser now requires colons between hours and minutes and optional seconds.
parse_date("2010-01-01", "%AD") #>  "2010-01-01" parse_time("15:01", "%AT") #> 15:01:00
If the format argument is omitted in parse_date() or parse_time(), the default date and time formats specified in the locale will be used. These now default to %AD and %AT respectively. You may want to override them in your standard locale() if the conventions are different where you live.
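For example (a minimal sketch, not from the original post):

# If dates are written day/month/year where you live, override the
# default date format in your locale
parse_date("14/10/1979", locale = locale(date_format = "%d/%m/%Y"))
#> "1979-10-14"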
Low-level readers and writers
readr now contains a full set of efficient lower-level readers:
- read_file() reads a file into a length-1 character vector; read_file_raw() reads a file into a single raw vector.
- read_lines() reads a file into a character vector with one entry per line; read_lines_raw() reads into a list of raw vectors with one entry per line.
These are paired with write_lines() and write_file(), to efficiently write character and raw vectors back to disk.
- read_fwf() was overhauled to reliably read only a partial set of columns, to read files with ragged final columns (by setting the final position/width to NA), and to skip comments (with the comment argument).
- readr contains an experimental API for reading a file in chunks, e.g. read_csv_chunked() and read_lines_chunked(). These allow you to work with files that are bigger than memory. We haven’t yet finalised the API so please use with care, and send us your feedback.
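As a rough illustration (a sketch only; the API is experimental and may change):

# Process mtcars.csv ten rows at a time, printing the number of rows
# in each chunk as a side effect
read_csv_chunked(
  readr_example("mtcars.csv"),
  callback = function(x, pos) print(nrow(x)),
  chunk_size = 10
)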
- There are many other bug fixes and minor improvements. You can see a complete list in the release notes.