rstudio::conf 2017, the conference on all things R and RStudio, is only 90 days away. Now is the time to claim your spot or grab one of the few remaining seats at Training Days – including the new Tidyverse workshop.
Whether you’re already registered or still working on it, we’re delighted today to announce the full conference schedule, so that you can plan your days in Florida.
rstudio::conf 2017 takes place January 12-14 at the Gaylord Resorts in Kissimmee, Florida. There are over 30 talks and tutorials to choose from that are sure to accelerate your productivity in R and RStudio. In addition to the highlights below, topics include the latest news on R Notebooks, sparklyr, profiling, the tidyverse, Shiny, R Markdown, HTML widgets, data access and the new enterprise-scale publishing capabilities of RStudio Connect.
– Hadley Wickham, Chief Scientist, RStudio: Data Science in the Tidyverse
– Andrew Flowers, Economics Writer, FiveThirtyEight: Finding and Telling Stories with R
– J.J. Allaire, Software Engineer, CEO & Founder: RStudio Past, Present and Future
– Winston Chang, Software Engineer, RStudio: Building Dashboards with Shiny
– Charlotte Wickham, Oregon State University: Happy R Users Purrr
– Yihui Xie, Software Engineer, RStudio: Advanced R Markdown
– Jenny Bryan, University of British Columbia: Happy Git and GitHub for the UseR
– Max Kuhn, Senior Director Non-Clinical Statistics, Pfizer
– Dirk Eddelbuettel, Ketchum Trading: Extending R with C++: A Brief Introduction to Rcpp
– Hilary Parker, Stitch Fix: “Opinionated Analysis Development”
– Bryan Lewis, Paradigm4: “Fun with htmlwidgets”
– Ryan Hafen, Hafen Consulting: “Interactive plotting with rbokeh and crosstalk”
– Julia Silge, Datassist: “Text mining, the tidy way”
– Bob Rudis, Rapid7: “Writing readable code with pipes”
– Joseph Rickert, R Ambassador, RStudio: R’s Role in Data Science
Special Reminder: When you register, make sure you purchase your ticket for Friday evening at Universal’s Wizarding World of Harry Potter. The park is reserved exclusively for rstudio::conf attendees. It’s an extraordinary experience we’re sure you’ll enjoy!
We appreciate our sponsors and exhibitors!
If there’s one word that could describe the default styling of Shiny applications, it might be “minimalist.” Shiny’s UI components are built using the Bootstrap web framework, and unless the appearance is customized, the application will be mostly white and light gray.
Fortunately, it’s easy to add a bit of flavor to your Shiny application, with the shinythemes package. We’ve just released version 1.1.1 of shinythemes, which includes many new themes from bootswatch.com, as well as a theme selector which you can use to test out different themes on a live Shiny application.
Here’s an example of the theme selector in use (try out the app here):
To install the latest version of shinythemes, run:
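install.packages("shinythemes")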
To use the theme selector, all you need to do is add this somewhere in your app’s UI code:
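shinythemes::themeSelector()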
Once you’ve chosen which theme you want, all you need to do is pass it to the theme argument of the fluidPage or fixedPage functions. If you want to use “cerulean”, you would do this:
fluidPage(theme = shinytheme("cerulean"), ... )
To learn more and see screenshots of the different themes, visit the shinythemes web page. Enjoy!
I’m pleased to announce the release of haven. Haven is designed to facilitate the transfer of data between R and SAS, SPSS, and Stata. It makes it easy to read SAS, SPSS, and Stata file formats into R data frames, and makes it easy to save your R data frames into SAS, SPSS, and Stata formats if you need to collaborate with others using closed-source statistical software. Install haven by running:
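install.packages("haven")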
haven 1.0.0 is a major release, and indicates that haven is now largely feature complete and has been tested on many real world datasets. There are four major changes in this version of haven:
- Improvements to the underlying ReadStat library
- Better handling of “special” missing values
- Improved date/time support
- Support for other file metadata.
There were also a whole bunch of other minor improvements and bug fixes: you can see the complete list in the release notes.
- Can read binary/Ross compressed SAS files.
- Support for reading and writing Stata 14 data files.
- write_sas() allows you to write data frames out to sas7bdat files. This is still somewhat experimental.
- read_por() now actually works.
- Many other bug fixes and minor improvements.
haven 1.0.0 includes comprehensive support for the “special” types of missing values found in SAS, SPSS, and Stata. All three tools provide a global “system missing value”, displayed as a period (.). This is roughly equivalent to R’s NA, although neither Stata nor SAS propagate missingness in numeric comparisons (SAS treats the missing value as the smallest possible number and Stata treats it as the largest possible number).
Each tool also provides a mechanism for recording multiple types of missingness:
- Stata has “extended” missing values, .a through .z.
- SAS has “special” missing values, .a through .z, plus ._.
- SPSS has per-column “user” missing values. Each column can declare up to three distinct values or a range of values (plus one distinct value) that should be treated as missing.
Stata and SAS only support tagged missing values for numeric columns. SPSS supports up to three distinct values for character columns. Generally, operations involving a user-missing type return a system missing value.
Haven models these missing values in two different ways:
- For SAS and Stata, haven provides tagged_na(), which extends R’s regular NA to add a single character label.
- For SPSS, haven provides labelled_spss(), which also models user-defined values and ranges.
Use zap_missing() if you just want to convert these special values to R’s regular NA.
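For example, here’s a small sketch of how tagged missing values behave in haven:

library(haven)
x <- c(1, 2, tagged_na("a"), tagged_na("z"), NA)
is.na(x)              # every kind of missing value is still missing to R
na_tag(x)             # recover the tags; a plain NA has no tag
is_tagged_na(x, "a")  # test for a specific tag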
You can get more details in the semantics vignette.
Support for date/times has substantially improved:
- read_dta() now recognises “%d” and custom date types.
- read_sav() now correctly recognises EDATE and JDATE formats as dates. Variables with format DATE, ADATE, EDATE, JDATE or SDATE are imported as Date variables instead of date-times.
- write_sav() now supports writing date/times.
- Support for hms() has been moved into the hms package. Time variables now have class c("hms", "difftime") and a units attribute with value “secs”.
Haven is slowly adding support for other types of metadata:
- Variable formats can be read and written. Similarly to variable labels, formats are stored as an attribute on the vector. Use zap_formats() if you want to remove these attributes.
- Added support for reading file “label” and “notes”. These are not currently printed, but are stored in the attributes if you need to access them.
Over the past couple of years we’ve heard time and time again that people want a native dplyr interface to Spark, so we built one! sparklyr also provides interfaces to Spark’s distributed machine learning algorithms and much more. Highlights include:
- Interactively manipulate Spark data using both dplyr and SQL (via DBI).
- Filter and aggregate Spark datasets then bring them into R for analysis and visualization.
- Orchestrate distributed machine learning from R using either Spark MLlib or H2O Sparkling Water.
- Create extensions that call the full Spark API and provide interfaces to Spark packages.
- Integrated support for establishing Spark connections and browsing Spark data frames within the RStudio IDE.
We’re also excited to be working with several industry partners. IBM is incorporating sparklyr into their Data Science Experience, Cloudera is working with us to ensure that sparklyr meets the requirements of their enterprise customers, and H2O has provided an integration between sparklyr and H2O Sparkling Water.
You can install sparklyr from CRAN as follows:
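install.packages("sparklyr")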
You should also install a local version of Spark for development purposes:
library(sparklyr)
spark_install(version = "1.6.2")
If you use the RStudio IDE, you should also download the latest preview release of the IDE which includes several enhancements for interacting with Spark.
Extensive documentation and examples are available at http://spark.rstudio.com.
Connecting to Spark
You can connect to both local instances of Spark as well as remote Spark clusters. Here we’ll connect to a local instance of Spark:
library(sparklyr)
sc <- spark_connect(master = "local")
The returned Spark connection (sc) provides a remote dplyr data source to the Spark cluster.
You can copy R data frames into Spark using the dplyr copy_to function (more typically though you’ll read data within the Spark cluster using the spark_read family of functions). For the examples below we’ll copy some datasets from R into Spark (note that you may need to install the nycflights13 and Lahman packages in order to execute this code):
library(dplyr)
iris_tbl <- copy_to(sc, iris)
flights_tbl <- copy_to(sc, nycflights13::flights, "flights")
batting_tbl <- copy_to(sc, Lahman::Batting, "batting")
We can now use all of the available dplyr verbs against the tables within the cluster. Here’s a simple filtering example:
# filter by departure delay
flights_tbl %>% filter(dep_delay == 2)
Introduction to dplyr provides additional dplyr examples you can try. For example, consider the last example from the tutorial which plots data on flight delays:
delay <- flights_tbl %>%
  group_by(tailnum) %>%
  summarise(count = n(), dist = mean(distance), delay = mean(arr_delay)) %>%
  filter(count > 20, dist < 2000, !is.na(delay)) %>%
  collect()

# plot delays
library(ggplot2)
ggplot(delay, aes(dist, delay)) +
  geom_point(aes(size = count), alpha = 1/2) +
  geom_smooth() +
  scale_size_area(max_size = 2)
dplyr window functions are also supported, for example:
batting_tbl %>%
  select(playerID, yearID, teamID, G, AB:H) %>%
  arrange(playerID, yearID, teamID) %>%
  group_by(playerID) %>%
  filter(min_rank(desc(H)) <= 2 & H > 0)
For additional documentation on using dplyr with Spark see the dplyr section of the sparklyr website.
It’s also possible to execute SQL queries directly against tables within a Spark cluster. The spark_connection object implements a DBI interface for Spark, so you can use dbGetQuery to execute SQL and return the result as an R data frame:
library(DBI)
iris_preview <- dbGetQuery(sc, "SELECT * FROM iris LIMIT 10")
You can orchestrate machine learning algorithms in a Spark cluster via either Spark MLlib or via the H2O Sparkling Water extension package. Both provide a set of high-level APIs built on top of DataFrames that help you create and tune machine learning workflows.
In this example we’ll use ml_linear_regression to fit a linear regression model. We’ll use the built-in mtcars dataset, and see if we can predict a car’s fuel consumption (mpg) based on its weight (wt) and the number of cylinders the engine contains (cyl). We’ll assume in each case that the relationship between mpg and each of our features is linear.
# copy mtcars into spark
mtcars_tbl <- copy_to(sc, mtcars)

# transform our data set, and then partition into 'training', 'test'
partitions <- mtcars_tbl %>%
  filter(hp >= 100) %>%
  mutate(cyl8 = cyl == 8) %>%
  sdf_partition(training = 0.5, test = 0.5, seed = 1099)

# fit a linear model to the training dataset
fit <- partitions$training %>%
  ml_linear_regression(response = "mpg", features = c("wt", "cyl"))
For linear regression models produced by Spark, we can use summary() to learn a bit more about the quality of our fit, and the statistical significance of each of our predictors.
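For example, continuing with the fit object from above:

summary(fit)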
Spark machine learning supports a wide array of algorithms and feature transformations, and as illustrated above it’s easy to chain these functions together with dplyr pipelines. To learn more see the Spark MLlib section of the sparklyr website.
H2O Sparkling Water
Let’s walk through the same mtcars example, but in this case use H2O’s machine learning algorithms via the H2O Sparkling Water extension. The dplyr code used to prepare the data is the same, but after partitioning into test and training data we call h2o.glm rather than ml_linear_regression:
# convert to h2o_frame (uses the same underlying rdd)
training <- as_h2o_frame(partitions$training)
test <- as_h2o_frame(partitions$test)

# fit a linear model to the training dataset
fit <- h2o.glm(x = c("wt", "cyl"),
               y = "mpg",
               training_frame = training,
               lambda_search = TRUE)

# inspect the model
print(fit)
For linear regression models produced by H2O, we can use either print() or summary() to learn a bit more about the quality of our fit. The summary() method returns some extra information about scoring history and variable importance.
To learn more see the H2O Sparkling Water section of the sparklyr website.
The facilities used internally by sparklyr for its dplyr and machine learning interfaces are available to extension packages. Since Spark is a general purpose cluster computing system there are many potential applications for extensions (e.g. interfaces to custom machine learning pipelines, interfaces to 3rd party Spark packages, etc.).
We’re excited to see what other sparklyr extensions the R community creates. To learn more see the Extensions section of the sparklyr website.
The latest RStudio Preview Release of the RStudio IDE includes integrated support for Spark and the sparklyr package, including tools for:
- Creating and managing Spark connections
- Browsing the tables and columns of Spark DataFrames
- Previewing the first 1,000 rows of Spark DataFrames
Once you’ve installed the sparklyr package, you should find a new Spark pane within the IDE. This pane includes a New Connection dialog which can be used to make connections to local or remote Spark instances:
Once you’ve connected to Spark you’ll be able to browse the tables contained within the Spark cluster:
The Spark DataFrame preview uses the standard RStudio data viewer:
The RStudio IDE features for sparklyr are available now as part of the RStudio Preview Release. The final version of RStudio IDE that includes integrated support for sparklyr will ship within the next few weeks.
We’re very pleased to be joined in this announcement by IBM, Cloudera, and H2O, who are working with us to ensure that sparklyr meets the requirements of enterprise customers and is easy to integrate with current and future deployments of Spark.
“With our latest contributions to Apache Spark and the release of sparklyr, we continue to emphasize R as a primary data science language within the Spark community. Additionally, we are making plans to include sparklyr in Data Science Experience to provide the tools data scientists are comfortable with to help them bring business-changing insights to their companies faster,” said Ritika Gunnar, vice president of Offering Management, IBM Analytics.
“At Cloudera, data science is one of the most popular use cases we see for Apache Spark as a core part of the Apache Hadoop ecosystem, yet the lack of a compelling R experience has limited data scientists’ access to available data and compute,” said Charles Zedlewski, vice president, Products at Cloudera. “We are excited to partner with RStudio to help bring sparklyr to the enterprise, so that data scientists and IT teams alike can get more value from their existing skills and infrastructure, all with the security, governance, and management our customers expect.”
“At H2O.ai, we’ve been focused on bringing the best of breed open source machine learning to data scientists working in R & Python. However, the lack of robust tooling in the R ecosystem for interfacing with Apache Spark has made it difficult for the R community to take advantage of the distributed data processing capabilities of Apache Spark.
We’re excited to work with RStudio to bring the ease of use of dplyr and the distributed machine learning algorithms from H2O’s Sparkling Water to the R community via the sparklyr & rsparkling packages”
The tidyverse is a set of packages that work in harmony because they share common data representations and API design. The tidyverse package is designed to make it easy to install and load core packages from the tidyverse in a single command.
The best place to learn about all the packages in the tidyverse and how they fit together is R for Data Science. Expect to hear more about the tidyverse in the coming months as I work on improved package websites, making citation easier, and providing a common home for discussions about data analysis with the tidyverse.
You can install tidyverse with
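install.packages("tidyverse")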
This will install the core tidyverse packages that you are likely to use in almost every analysis:
- ggplot2, for data visualisation.
- dplyr, for data manipulation.
- tidyr, for data tidying.
- readr, for data import.
- purrr, for functional programming.
- tibble, for tibbles, a modern re-imagining of data frames.
It also installs a selection of other tidyverse packages that you’re likely to use frequently, but probably not in every analysis. This includes packages for data manipulation:
- DBI, for databases.
- haven, for SPSS, SAS and Stata files.
- httr, for web APIs.
- jsonlite, for JSON.
- readxl, for .xls and .xlsx files.
- rvest, for web scraping.
- xml2, for XML.
These packages will be installed along with tidyverse, but you’ll load them explicitly with library().
library(tidyverse) will load the core tidyverse packages: ggplot2, tibble, tidyr, readr, purrr, and dplyr. You also get a condensed summary of conflicts with other packages you have loaded:
library(tidyverse)
#> Loading tidyverse: ggplot2
#> Loading tidyverse: tibble
#> Loading tidyverse: tidyr
#> Loading tidyverse: readr
#> Loading tidyverse: purrr
#> Loading tidyverse: dplyr
#> Conflicts with tidy packages ---------------------------------------
#> filter(): dplyr, stats
#> lag():    dplyr, stats
You can see conflicts created later with tidyverse_conflicts():
library(MASS)
#>
#> Attaching package: 'MASS'
#> The following object is masked from 'package:dplyr':
#>
#>     select

tidyverse_conflicts()
#> Conflicts with tidy packages --------------------------------------
#> filter(): dplyr, stats
#> lag():    dplyr, stats
#> select(): dplyr, MASS
And you can check that all tidyverse packages are up-to-date with tidyverse_update():
tidyverse_update()
#> The following packages are out of date:
#> * broom (0.4.0 -> 0.4.1)
#> * DBI   (0.4.1 -> 0.5)
#> * Rcpp  (0.12.6 -> 0.12.7)
#> Update now?
#>
#> 1: Yes
#> 2: No
I am pleased to announce lubridate 1.6.0. Lubridate is designed to make working with dates and times as pleasant as possible, and is maintained by Vitalie Spinu. You can install the latest version with:
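install.packages("lubridate")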
This release includes a range of bug fixes and minor improvements. Some highlights from this release include:
- The period() and duration() constructors now accept character strings and allow a very flexible specification of timespans:
period("3H 2M 1S")
#> "3H 2M 1S"

duration("3 hours, 2 mins, 1 secs")
#> "10921s (~3.03 hours)"

# Missing numerals default to 1.
# Repeated units are summed
period("hour minute minute")
#> "1H 2M 0S"
Period and duration parsing allows for arbitrary abbreviations of time units as long as the specification is unambiguous. For single letter specs, strptime() rules are followed, so m stands for months and M for minutes.
These same rules allow you to compare strings and durations/periods:
"2mins 1 sec" > period("2mins") #>  TRUE
- Date time rounding (with ceiling_date()) now supports unit multipliers, like “3 days” or “2 months”:
ceiling_date(ymd_hms("2016-09-12 17:10:00"), unit = "5 minutes") #>  "2016-09-12 17:10:00 UTC"
- The behavior of ceiling_date() for Date objects is now more intuitive. In short, dates are now interpreted as time intervals that are physically part of longer unit intervals:
|day1| ... |day31|day1| ... |day28| ...
|     January    |    February    | ...
That means that rounding up 2000-01-01 by a month is done to the boundary between January and February, i.e. 2000-02-01:
ceiling_date(ymd("2000-01-01"), unit = "month") #>  "2000-02-01"
This behavior is controlled by the change_on_boundary argument.
- It is now possible to compare POSIXct and Date objects:
ymd_hms("2000-01-01 00:00:01") > ymd("2000-01-01") #>  TRUE
- C-level parsing now handles English months and AM/PM indicator regardless of your locale. This means that English date-times are now always handled by lubridate C-level parsing and you don’t need to explicitly switch the locale.
- New parsing function yq() allows you to parse a year + quarter:
yq("2016-02") #>  "2016-04-01"
The q format is available in all lubridate parsing functions.
I’m excited to announce forcats, a new package for categorical variables, or factors. Factors have a bad rap in R because they often turn up when you don’t want them. That’s because historically, factors were more convenient than character vectors, as discussed in stringsAsFactors: An unauthorized biography by Roger Peng, and stringsAsFactors = <sigh> by Thomas Lumley.
If you use packages from the tidyverse (like tibble and readr) you don’t need to worry about getting factors when you don’t want them. But factors are a useful data structure in their own right, particularly for modelling and visualisation, because they allow you to control the order of the levels. Working with factors in base R can be a little frustrating because of a handful of missing tools. The goal of forcats is to fill in those missing pieces so you can access the power of factors with a minimum of pain.
Install forcats with:
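install.packages("forcats")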
forcats provides two main types of tools to change either the values or the order of the levels. I’ll call out some of the most important functions below, using the included gss_cat dataset, which contains a selection of categorical variables from the General Social Survey.
library(dplyr)
library(ggplot2)
library(forcats)

gss_cat
#> # A tibble: 21,483 × 9
#>    year       marital   age   race        rincome            partyid
#>   <int>        <fctr> <int> <fctr>         <fctr>             <fctr>
#> 1  2000 Never married    26  White  $8000 to 9999       Ind,near rep
#> 2  2000      Divorced    48  White  $8000 to 9999 Not str republican
#> 3  2000       Widowed    67  White Not applicable        Independent
#> 4  2000 Never married    39  White Not applicable       Ind,near rep
#> 5  2000      Divorced    25  White Not applicable   Not str democrat
#> 6  2000       Married    25  White $20000 - 24999    Strong democrat
#> # ... with 2.148e+04 more rows, and 3 more variables: relig <fctr>,
#> #   denom <fctr>, tvhours <int>
Change level values
You can recode specified factor levels with fct_recode():
gss_cat %>% count(partyid)
#> # A tibble: 10 × 2
#>              partyid     n
#>               <fctr> <int>
#> 1          No answer   154
#> 2         Don't know     1
#> 3        Other party   393
#> 4  Strong republican  2314
#> 5 Not str republican  3032
#> 6       Ind,near rep  1791
#> # ... with 4 more rows

gss_cat %>%
  mutate(partyid = fct_recode(partyid,
    "Republican, strong"    = "Strong republican",
    "Republican, weak"      = "Not str republican",
    "Independent, near rep" = "Ind,near rep",
    "Independent, near dem" = "Ind,near dem",
    "Democrat, weak"        = "Not str democrat",
    "Democrat, strong"      = "Strong democrat"
  )) %>%
  count(partyid)
#> # A tibble: 10 × 2
#>                 partyid     n
#>                  <fctr> <int>
#> 1             No answer   154
#> 2            Don't know     1
#> 3           Other party   393
#> 4    Republican, strong  2314
#> 5      Republican, weak  3032
#> 6 Independent, near rep  1791
#> # ... with 4 more rows
Note that unmentioned levels are left as is, and the order of the levels is preserved.
fct_lump() allows you to lump the rarest (or most common) levels into a new “other” level. The default behaviour is to collapse the smallest levels into other, ensuring that it’s still the smallest level. For the religion variable that tells us that Protestants outnumber all other religions, which is interesting, but we probably want more detail.
gss_cat %>%
  mutate(relig = fct_lump(relig)) %>%
  count(relig)
#> # A tibble: 2 × 2
#>        relig     n
#>       <fctr> <int>
#> 1      Other 10637
#> 2 Protestant 10846
Alternatively you can supply a number of levels to keep, n, or minimum proportion for inclusion, prop. If you use negative values, fct_lump() will change direction, and combine the most common values while preserving the rarest.
gss_cat %>%
  mutate(relig = fct_lump(relig, n = 5)) %>%
  count(relig)
#> # A tibble: 6 × 2
#>        relig     n
#>       <fctr> <int>
#> 1      Other   913
#> 2  Christian   689
#> 3       None  3523
#> 4     Jewish   388
#> 5   Catholic  5124
#> 6 Protestant 10846

gss_cat %>%
  mutate(relig = fct_lump(relig, prop = -0.10)) %>%
  count(relig)
#> # A tibble: 12 × 2
#>                     relig     n
#>                    <fctr> <int>
#> 1               No answer    93
#> 2              Don't know    15
#> 3 Inter-nondenominational   109
#> 4         Native american    23
#> 5               Christian   689
#> 6      Orthodox-christian    95
#> # ... with 6 more rows
Change level order
There are four simple helpers for common operations (a quick example follows the list):
- fct_relevel() is similar to stats::relevel() but allows you to move any number of levels to the front.
- fct_inorder() orders according to the first appearance of each level.
- fct_infreq() orders from most common to rarest.
- fct_rev() reverses the order of levels.
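For example, fct_infreq() and fct_rev() can be combined to order a bar chart from least to most common (a quick sketch using the gss_cat data loaded above):

gss_cat %>%
  mutate(marital = marital %>% fct_infreq() %>% fct_rev()) %>%
  ggplot(aes(marital)) +
  geom_bar()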
fct_reorder() and fct_reorder2() are useful for visualisations.
fct_reorder() reorders the factor levels by another variable. This is useful when you map a categorical variable to position, as in the following example, which shows the average number of hours spent watching television across religions.
relig <- gss_cat %>%
  group_by(relig) %>%
  summarise(
    age = mean(age, na.rm = TRUE),
    tvhours = mean(tvhours, na.rm = TRUE),
    n = n()
  )

ggplot(relig, aes(tvhours, relig)) + geom_point()

ggplot(relig, aes(tvhours, fct_reorder(relig, tvhours))) + geom_point()
fct_reorder2() extends the same idea to plots where a factor is mapped to another aesthetic, like colour. The defaults are designed to make legends easier to read for line plots, as shown in the following example looking at marital status by age.
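Here’s a sketch along those lines: compute the proportion of each marital status within each age, then let fct_reorder2() order the legend so it lines up with the right-hand ends of the lines:

by_age <- gss_cat %>%
  filter(!is.na(age)) %>%
  count(age, marital) %>%
  group_by(age) %>%
  mutate(prop = n / sum(n))

ggplot(by_age, aes(age, prop, colour = fct_reorder2(marital, age, prop))) +
  geom_line() +
  labs(colour = "marital")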
We’re proud to announce version 1.2.0 of the tibble package. Tibbles are a modern reimagining of the data frame, keeping what time has shown to be effective, and throwing out what is not. Grab the latest version with:
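install.packages("tibble")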
This is mostly a maintenance release, with the following major changes:
- More options for adding individual rows and (new!) columns
- Improved function names
- Minor tweaks to the output
There are many other small improvements and bug fixes: please see the release notes for a complete list.
Thanks to Jenny Bryan for add_column() improvements and ideas, to William Dunlap for pointing out a bug with tibble’s implementation of all.equal(), to Kevin Wright for pointing out a rare bug with glimpse(), and to all the other contributors. Use the issue tracker to submit bugs or suggest ideas; your contributions are always welcome.
Adding rows and columns
There are now more options for adding individual rows, and columns can be added in a similar way, illustrated with this small tibble:
df <- tibble(x = 1:3, y = 3:1)
df
#> # A tibble: 3 × 2
#>       x     y
#>   <int> <int>
#> 1     1     3
#> 2     2     2
#> 3     3     1
The add_row() function allows control over where the new rows are added. In the following example, the row (4, 0) is added before the second row:
df %>% add_row(x = 4, y = 0, .before = 2)
#> # A tibble: 4 × 2
#>       x     y
#>   <dbl> <dbl>
#> 1     1     3
#> 2     4     0
#> 3     2     2
#> 4     3     1
Adding more than one row is now fully supported, although not recommended in general because it can be a bit hard to read.
df %>% add_row(x = 4:5, y = 0:-1)
#> # A tibble: 5 × 2
#>       x     y
#>   <int> <int>
#> 1     1     3
#> 2     2     2
#> 3     3     1
#> 4     4     0
#> 5     5    -1
Columns can now be added in much the same way with the new add_column() function:
df %>% add_column(z = -1:1, w = 0)
#> # A tibble: 3 × 4
#>       x     y     z     w
#>   <int> <int> <int> <dbl>
#> 1     1     3    -1     0
#> 2     2     2     0     0
#> 3     3     1     1     0
It also supports the .before and .after arguments:
df %>% add_column(z = -1:1, .after = 1)
#> # A tibble: 3 × 3
#>       x     z     y
#>   <int> <int> <int>
#> 1     1    -1     3
#> 2     2     0     2
#> 3     3     1     1

df %>% add_column(w = 0:2, .before = "x")
#> # A tibble: 3 × 3
#>       w     x     y
#>   <int> <int> <int>
#> 1     0     1     3
#> 2     1     2     2
#> 3     2     3     1
The add_column() function will never alter your existing data: you can’t overwrite existing columns, and you can’t add new observations.
frame_data() is now tribble(), which stands for “transposed tibble”. The old name still works, but will be deprecated eventually.
tribble(
  ~x, ~y,
   1, "a",
   2, "z"
)
#> # A tibble: 2 × 2
#>       x     y
#>   <dbl> <chr>
#> 1     1     a
#> 2     2     z
We’ve tweaked the output again to use the multiplication symbol × instead of x when printing dimensions (this still renders nicely on Windows). We surround non-semantic column names with backticks, and dttm is now used instead of time when printing columns, to distinguish date-times from times.
The example below shows the new rendering:
tibble(`date and time` = Sys.time(), time = hms::hms(minutes = 3))
#> # A tibble: 1 × 2
#>       `date and time`     time
#>                <dttm>   <time>
#> 1 2016-08-29 16:48:57 00:03:00
Expect the printed output to continue to evolve in the next release. Stay tuned for a new function that reconstructs tribble() calls from existing data frames.
I’m pleased to announce version 1.1.0 of stringr. stringr makes string manipulation easier by using consistent function and argument names, and eliminating options that you don’t need 95% of the time. To get started with stringr, check out the strings chapter in R for data science. Install it with:
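install.packages("stringr")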
This release is mostly bug fixes, but there are a couple of new features you might care about.
- There are three new datasets, fruit, words and sentences, to help you practice your regular expression skills:
str_subset(fruit, "(..)\\1")
#> "banana"      "coconut"     "cucumber"    "jujube"      "papaya"
#> "salal berry"

head(words)
#> "a"        "able"     "about"    "absolute" "accept"   "account"

sentences
#> "The birch canoe slid on the smooth planks."
- More functions work with boundary(): str_detect() and str_subset() can detect boundaries, and str_extract() and str_extract_all() pull out the components between boundaries. This is particularly useful if you want to extract logical constructs like words or sentences.
x <- "This is harder than you might expect, e.g. punctuation!"

x %>% str_extract_all(boundary("word")) %>% .[[1]]
#> "This"        "is"          "harder"      "than"        "you"
#> "might"       "expect"      "e.g"         "punctuation"

x %>% str_extract(boundary("sentence"))
#> "This is harder than you might expect, e.g. punctuation!"
- str_view() and str_view_all() create HTML widgets that display regular expression matches. This is particularly useful for teaching.
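For example, a quick sketch that highlights every vowel in a couple of words:

str_view_all(c("banana", "coconut"), "[aeiou]")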
For a complete list of changes, please see the release notes.
Want to Master R? There’s no better time or place if you’re within an easy train, plane, automobile ride or a short jog of Hadley Wickham’s workshop on September 12th and 13th at the AMA Conference Center in New York City.
As of today, there are just over 20 seats left. Discounts are still available for academics (students or faculty) and for 5 or more attendees from any organization. Email firstname.lastname@example.org if you have any questions about the workshop that you don’t find answered on the registration page.
Hadley has no Master R workshops planned for Boston, Washington DC, New York City or any location in the Northeast in the next year. If you’ve always wanted to take Master R but haven’t found the time, well, there’s truly no better time!
P.S. We’ve arranged a “happy hour” reception after class on Monday the 12th. Be sure to set aside an hour or so after the first day to talk to your classmates and Hadley about what’s happening in R.