Today we’re excited to announce htmlwidgets, a new framework that brings the best of JavaScript data visualization libraries to R. There are already several packages that take advantage of the framework (leaflet, dygraphs, networkD3, DataTables, and rthreejs) with hopefully many more to come.

An htmlwidget works just like an R plot except it produces an interactive web visualization. A line or two of R code is all it takes to produce a D3 graphic or Leaflet map. Widgets can be used at the R console as well as embedded in R Markdown reports and Shiny web applications. Here’s an example of using leaflet directly from the R console:
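A minimal sketch of what such a call looks like (the coordinates and popup text here are illustrative, not from the original post):

```r
library(leaflet)

# Build a map widget, add a base-map tile layer, then drop a marker.
# Printing the result renders the interactive map.
leaflet() %>%
  addTiles() %>%
  addMarkers(lng = 174.768, lat = -36.852,
             popup = "The birthplace of R")
```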


When printed at the console the leaflet widget displays in the RStudio Viewer pane. All of the tools typically available for plots are also available for widgets, including history, zooming, and export to file/clipboard (note that when not running within RStudio widgets will display in an external web browser).

Here’s the same widget in an R Markdown report. Widgets automatically print as HTML within R Markdown documents and even respect the default knitr figure width and height.
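A widget-producing chunk in an .Rmd file might look like this (a sketch; fig.width and fig.height are the standard knitr chunk options the widget respects):

````markdown
```{r, fig.width = 7, fig.height = 5}
library(leaflet)
leaflet() %>% addTiles()
```
````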


Widgets also provide Shiny output bindings, so they can easily be used within web applications. Here’s the same widget in a Shiny application:
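A minimal Shiny sketch, assuming the output/render functions that the leaflet package provides (leafletOutput() and renderLeaflet()):

```r
library(shiny)
library(leaflet)

ui <- fluidPage(
  leafletOutput("map")   # placeholder in the UI for the widget
)

server <- function(input, output) {
  # renderLeaflet() is the Shiny render function provided by leaflet
  output$map <- renderLeaflet({
    leaflet() %>% addTiles()
  })
}

shinyApp(ui, server)
```

Running this starts a local web application with the map embedded in the page.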


Bringing JavaScript to R

The htmlwidgets framework is a collaboration between Ramnath Vaidyanathan (rCharts), Kenton Russell (Timely Portfolio), and RStudio. We’ve all spent countless hours creating bindings between R and the web and were motivated to create a framework that made this as easy as possible for all R developers.

There are a plethora of libraries available that create attractive and fully interactive data visualizations for the web. However, the programming interface to these libraries is JavaScript, which places them outside the reach of nearly all statisticians and analysts. htmlwidgets makes it extremely straightforward to create an R interface for any JavaScript library.

Here are a few widget libraries that have been built so far:

  • leaflet, a library for creating dynamic maps that support panning and zooming, with various annotations like markers, polygons, and popups.
  • dygraphs, which provides rich facilities for charting time-series data and includes support for many interactive features including series/point highlighting, zooming, and panning.
  • networkD3, a library for creating D3 network graphs including force directed networks, Sankey diagrams, and Reingold-Tilford tree networks.
  • DataTables, which displays R matrices or data frames as interactive HTML tables that support filtering, pagination, and sorting.
  • rthreejs, which features 3D scatterplots and globes based on WebGL.

All of these libraries combine visualization with direct interactivity, enabling users to explore data dynamically. For example, time-series visualizations created with dygraphs allow dynamic panning and zooming:
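For instance, a dygraph of a built-in R time series (a sketch; dyRangeSelector() adds the interactive pan/zoom control):

```r
library(dygraphs)

# nhtemp: mean annual temperature in New Haven, 1912-1971 (built into R)
dygraph(nhtemp, main = "New Haven temperatures") %>%
  dyRangeSelector()
```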


Learning More

To learn more about the framework and see a showcase of the available widgets in action check out the htmlwidgets web site. To learn more about building your own widgets, install the htmlwidgets package from CRAN and check out the developer documentation.


httr 0.6.0 is now available on CRAN. The httr package makes it easy to talk to web APIs from R. Learn more in the quick start vignette.

This release is mostly bug fixes and minor improvements. The most important are:

  • handle_reset(), which allows you to reset the default handle if you get the error “easy handle already used in multi handle”.
  • write_stream() which lets you process the response from a server as a stream of raw vectors (#143).
  • VERB(), which allows you to send a request with a custom HTTP verb.
  • httr_dr() checks for common problems. It currently checks if your libcurl uses NSS. This is unlikely to work, so it gives you some advice on how to fix the problem (thanks to Dirk Eddelbuettel for debugging this problem and suggesting a remedy).
  • Added support for Google OAuth2 service accounts. (#119, thanks to help from @siddharthab). See ?oauth_service_token for details.
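For example, VERB() lets you issue a request with a verb that GET(), POST() and friends don’t cover (the URL here is illustrative, and the request needs a network connection):

```r
library(httr)

# Send a WebDAV PROPFIND request; VERB() takes the verb as a string
r <- VERB("PROPFIND", url = "http://httpbin.org/anything")
status_code(r)
```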

I’ve also switched from RC to R6 (which should make it easier to extend OAuth classes for non-standard OAuth implementations), and tweaked the use of the SSL certificate details bundled with httr. See the release notes for complete details.

tidyr 0.2.0 is now available on CRAN. tidyr makes it easy to “tidy” your data, storing it in a consistent form so that it’s easy to manipulate, visualise and model. Tidy data has variables in columns and observations in rows, and is described in more detail in the tidy data vignette. Install tidyr with:
install.packages("tidyr")

There are three important additions to tidyr 0.2.0:

  • expand() is a wrapper around expand.grid() that allows you to generate all possible combinations of two or more variables. In conjunction with dplyr::left_join(), this makes it easy to fill in missing rows of data.
    sales <- dplyr::data_frame(
      year = rep(c(2012, 2013), c(4, 2)),
      quarter = c(1, 2, 3, 4, 2, 3),
      sales = sample(6) * 100
    )
    # Missing sales data for 2013 Q1 & Q4
    sales
    #> Source: local data frame [6 x 3]
    #>   year quarter sales
    #> 1 2012       1   400
    #> 2 2012       2   200
    #> 3 2012       3   500
    #> 4 2012       4   600
    #> 5 2013       2   300
    #> 6 2013       3   100
    # Missing values are now explicit
    sales %>% 
      expand(year, quarter) %>%
      dplyr::left_join(sales)
    #> Joining by: c("year", "quarter")
    #> Source: local data frame [8 x 3]
    #>   year quarter sales
    #> 1 2012       1   400
    #> 2 2012       2   200
    #> 3 2012       3   500
    #> 4 2012       4   600
    #> 5 2013       1    NA
    #> 6 2013       2   300
    #> 7 2013       3   100
    #> 8 2013       4    NA
  • In the process of data tidying, it’s sometimes useful to have a column of a data frame that is a list of vectors. unnest() lets you simplify that column back down to an atomic vector, duplicating the original rows as needed. (NB: If you’re working with data frames containing lists, I highly recommend using dplyr’s tbl_df, which will display list-columns in a way that makes their structure more clear. Use dplyr::data_frame() to create a data frame wrapped with the tbl_df class.)
    raw <- dplyr::data_frame(
      x = 1:3,
      y = c("a", "d,e,f", "g,h")
    )
    # y is a character vector containing comma-separated strings
    raw
    #> Source: local data frame [3 x 2]
    #>   x     y
    #> 1 1     a
    #> 2 2 d,e,f
    #> 3 3   g,h
    # y is a list of character vectors
    as_list <- raw %>% dplyr::mutate(y = strsplit(y, ","))
    as_list
    #> Source: local data frame [3 x 2]
    #>   x        y
    #> 1 1 <chr[1]>
    #> 2 2 <chr[3]>
    #> 3 3 <chr[2]>
    # y is a character vector; rows are duplicated as needed
    as_list %>% unnest(y)
    #> Source: local data frame [6 x 2]
    #>   x y
    #> 1 1 a
    #> 2 2 d
    #> 3 2 e
    #> 4 2 f
    #> 5 3 g
    #> 6 3 h
  • separate() has a new extra argument that allows you to control what happens if a column doesn’t always split into the same number of pieces.
    raw %>% separate(y, c("trt", "B"), ",")
    #> Error: Values not split into 2 pieces at 1, 2
    raw %>% separate(y, c("trt", "B"), ",", extra = "drop")
    #> Source: local data frame [3 x 3]
    #>   x trt  B
    #> 1 1   a NA
    #> 2 2   d  e
    #> 3 3   g  h
    raw %>% separate(y, c("trt", "B"), ",", extra = "merge")
    #> Source: local data frame [3 x 3]
    #>   x trt   B
    #> 1 1   a  NA
    #> 2 2   d e,f
    #> 3 3   g   h

To read about the other minor changes and bug fixes, please consult the release notes.

reshape2 1.4.1

There’s also a new version of reshape2, 1.4.1. It includes three bug fixes contributed by Kevin Ushey. Read all about them in the release notes and install it with:
install.packages("reshape2")

We’ve teamed up with DataCamp to make a self-paced online course that teaches ggvis, the newest data visualization package by Hadley Wickham and Winston Chang. The ggvis course pairs challenging exercises, interactive feedback, and “to the point” videos to let you learn ggvis in a guided way.

In the course, you will learn how to make and customize graphics with ggvis. You’ll learn the commands and syntax that ggvis uses to build graphics, and you’ll learn the theory that underlies ggvis. ggvis implements the grammar of graphics, a logical method for building graphs that is easy to use and to extend. Finally, since this is ggvis, you’ll learn to make interactive graphics with sliders and other user controls.

The first part of the tutorial is available for free, so you can start learning immediately.

(Posted on behalf of Stefan Milton Bache)

Sometimes it’s the small things that make a big difference. For me, the introduction of our awkward looking friend, %>%, was one such little thing. I’d never suspected that it would have such an impact on the way quite a few people think and write R (including my own), or that pies would be baked (see here) and t-shirts printed (e.g. here) in honor of the successful three-char-long and slightly overweight operator. Of course a big part of the success is the very fruitful relationship with dplyr and its powerful verbs.

Quite some time went by without any changes to the CRAN version of magrittr. But many ideas have been evaluated and tested, and now we are happy to finally bring an update which brings both some optimization and a few nifty features — we hope that we have managed to strike a balance between simplicity and usefulness and that you will benefit from this update. You can install it now with:
install.packages("magrittr")

The underlying evaluation model is more coherent in this release; this makes the new features more natural extensions and improves performance somewhat. Below I’ll recap some of the important new features, which include functional sequences, a few specialized supplementary operators and better lambda syntax.

Functional sequences

The basic (pseudo) usage of the pipe operator goes something like this:

awesome_data <-
  raw_interesting_data %>%
  transform(somehow) %>%
  filter(the_good_parts)

This statement has three parts: an input, an output, and a sequence of transformations. That’s surprisingly close to the definition of a function, so a magrittr pipeline is really just a convenient way of defining and applying a function.
A really useful new feature in magrittr 1.5 makes that explicit: you can use %>% not only to produce values but also to produce functions (or functional sequences)! It’s really all the same, except sometimes the function is applied instantly and produces a result, and sometimes it is not, in which case the function itself is returned. In this case, there is no initial value, so we replace that with the dot placeholder. Here is how:

mae <- . %>% abs %>% mean(na.rm = TRUE)
mae(rnorm(10))
#> [1] 0.5605

That’s equivalent to:

mae <- function(x) {
  mean(abs(x), na.rm = TRUE)
}

Even for a short function, this is more compact, and is easier to read as it is defined linearly from left to right.
There are some really cool use cases for this: functionals! Consider how clean it is to pass a function to lapply or aggregate!

info <-
  files %>%
  lapply(. %>% read_file %>% extract(the_goodies))

Functions made this way can be indexed with [ to get a new function containing only a subset of the steps.
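For example (a sketch using base functions):

```r
library(magrittr)

# A three-step functional sequence: abs, sort, then take the first two
f <- . %>% abs %>% sort %>% head(2)

# Index with [ to keep only the first two steps (abs, then sort)
g <- f[1:2]
g(c(-3, 1, -2))
#> [1] 1 2 3
```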

Lambda expressions

The new version makes it clearer that each step is really just a single-statement body of a unary function. What if we need a little more than one command to make a satisfactory “step” in a chain? Before, one might either define a function outside the chain, or even anonymously inside the chain, enclosing the entire definition in parentheses. Now extending that one command is like extending a standard one-command function: enclose whatever you’d like in braces, and that’s it:

value %>%
  foo %>% {
    x <- bar(.)
    y <- baz(.)
    x * y
  }

As usual, the name of the argument to that unary function is the dot (.).

Nested function calls

In this release the dot (.) will work also in nested function calls on the right-hand side, e.g.:

1:5 %>%
  paste(letters[.])
#> [1] "1 a" "2 b" "3 c" "4 d" "5 e"

When you use . inside a function call, it’s used in addition to, not instead of, . at the top-level. For example, the previous command is equivalent to:

1:5 %>% 
  paste(., letters[.])
#> [1] "1 a" "2 b" "3 c" "4 d" "5 e"

If you don’t want this behaviour, wrap the function call in braces:

1:5 %>% {
  paste(letters[.])
}
#> [1] "a" "b" "c" "d" "e"

A few of %>%’s friends

This release also introduces a few supplementary operators that make certain situations more comfortable.
The tee operator, %T>%, enables temporary branching in a pipeline to apply a few side-effect commands to the current value, like plotting or logging, and is inspired by the Unix tee command. The only difference to %>% is that %T>% returns the left-hand side rather than the result of applying the right-hand side:

value %>%
  transform %T>%
  plot

This is a shortcut for:

value %>%
  transform %>%
  { plot(.); . }

because plot() doesn’t normally return anything that can be piped along!
The exposition operator, %$%, is a wrapper around with(), which makes it easy to refer to the variables inside a data frame:

mtcars %$%
  plot(mpg, wt)

Finally, we also have %<>%, the compound assignment pipe operator. This must be the first operator in the chain, and it will assign the result of the pipeline to the left-hand side name or expression. Its purpose is to shorten expressions like this:

data$some_variable <-
  data$some_variable %>%
  transform

and turn them into something like this:

data$some_variable %<>%
  transform

Even a small example like x %<>% sort has its appeal!
In summary, there are a few new things to get to know, but magrittr is like it always was. Just a little coolr!

rvest is a new package that makes it easy to scrape (or harvest) data from HTML web pages, inspired by libraries like Beautiful Soup. It is designed to work with magrittr so that you can express complex operations as elegant pipelines composed of simple, easily understood pieces. Install it with:
install.packages("rvest")

rvest in action

To see rvest in action, imagine we’d like to scrape some information about The Lego Movie from IMDB. We start by downloading and parsing the file with html():

lego_movie <- html("")

To extract the rating, we start with selectorgadget to figure out which css selector matches the data we want: strong span. (If you haven’t heard of selectorgadget, make sure to read vignette("selectorgadget") – it’s the easiest way to determine which selector extracts the data that you’re interested in.) We use html_node() to find the first node that matches that selector, extract its contents with html_text(), and convert it to numeric with as.numeric():

lego_movie %>% 
  html_node("strong span") %>%
  html_text() %>%
  as.numeric()
#> [1] 7.9

We use a similar process to extract the cast, using html_nodes() to find all nodes that match the selector:

lego_movie %>%
  html_nodes("#titleCast .itemprop span") %>%
  html_text()
#>  [1] "Will Arnett"     "Elizabeth Banks" "Craig Berry"    
#>  [4] "Alison Brie"     "David Burrows"   "Anthony Daniels"
#>  [7] "Charlie Day"     "Amanda Farinos"  "Keith Ferguson" 
#> [10] "Will Ferrell"    "Will Forte"      "Dave Franco"    
#> [13] "Morgan Freeman"  "Todd Hansen"     "Jonah Hill"

The titles and authors of recent message board postings are stored in the third table on the page. We can use html_nodes() and [[ to find it, then coerce it to a data frame with html_table():

lego_movie %>%
  html_nodes("table") %>%
  .[[3]] %>%
  html_table()
#>                                              X 1            NA
#> 1 this movie is very very deep and philosophical   mrdoctor524
#> 2 This got an 8.0 and Wizard of Oz got an 8.1...  marr-justinm
#> 3                         Discouraging Building?       Laestig
#> 4                              LEGO - the plural      neil-476
#> 5                                 Academy Awards   browncoatjw
#> 6                    what was the funniest part? actionjacksin

Other important functions

  • If you prefer, you can use XPath selectors instead of CSS: html_nodes(doc, xpath = "//table//td").

  • Extract the tag names with html_tag(), text with html_text(), a single attribute with html_attr() or all attributes with html_attrs().

  • Detect and repair text encoding problems with guess_encoding() and repair_encoding().

  • Navigate around a website as if you’re in a browser with html_session(), jump_to(), follow_link(), back(), and forward(). Extract, modify and submit forms with html_form(), set_values() and submit_form(). (This is still a work in progress, so I’d love your feedback.)

To see these functions in action, check out package demos with demo(package = "rvest").

RStudio has teamed up with O’Reilly media to create a new way to learn R!

The Introduction to Data Science with R video course is a comprehensive introduction to the R language. It’s ideal for non-programmers with no data science experience or for data scientists switching to R from Excel, SAS or other software.

Join RStudio Master Instructor Garrett Grolemund as he covers the three skill sets of data science: computer programming (with R), manipulating data sets (including loading, cleaning, and visualizing data), and modeling data with statistical methods. You’ll learn R’s syntax and grammar as well as how to load, save, and transform data, generate beautiful graphs, and fit statistical models to the data.

All of the techniques introduced in this video are motivated by real problems that involve real datasets. You’ll get plenty of hands-on experience with R (and not just hear about it!), and lots of help if you get stuck.

You’ll also learn how to use the ggplot2, reshape2, and dplyr packages.

The course contains over eight hours of instruction. You can access the first hour free from O’Reilly’s website. The course covers the same content as our two day Introduction to Data Science with R workshop, right down to the same exercises. But unlike our workshops, the videos are self-paced, which can help you learn R in a more relaxed way.

To learn more, visit Introduction to Data Science with R.

I’m very pleased to announce RSQLite 1.0.0. RSQLite is the easiest way to use a SQL database from R:

# Create an ephemeral in-memory RSQLite database
con <- dbConnect(RSQLite::SQLite(), ":memory:")
# Copy in the built-in mtcars data frame
dbWriteTable(con, "mtcars", mtcars, row.names = FALSE)
#> [1] TRUE

# Fetch all results from a query:
res <- dbSendQuery(con, "SELECT * FROM mtcars WHERE cyl = 4 AND mpg < 23")
dbFetch(res)
#>    mpg cyl  disp  hp drat    wt  qsec vs am gear carb
#> 1 22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1
#> 2 22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
#> 3 21.5   4 120.1  97 3.70 2.465 20.01  1  0    3    1
#> 4 21.4   4 121.0 109 4.11 2.780 18.60  1  1    4    2
dbClearResult(res)
#> [1] TRUE

# Or fetch them a chunk at a time
res <- dbSendQuery(con, "SELECT * FROM mtcars WHERE cyl = 4")
while (!dbHasCompleted(res)) {
  chunk <- dbFetch(res, n = 10)
  print(nrow(chunk))
}
#> [1] 10
#> [1] 1
dbClearResult(res)
#> [1] TRUE

# Good practice to disconnect from the database when you're done
dbDisconnect(con)
#> [1] TRUE

RSQLite 1.0.0 is mostly a cleanup release. This means a lot of old functions have been deprecated and removed:

  • idIsValid() is deprecated; use dbIsValid() instead. dbBeginTransaction() is deprecated; use dbBegin() instead. fetch() is deprecated; use dbFetch() instead.
  • dbBuildTableDefinition() is now sqliteBuildTableDefinition() (to avoid implying that it’s a DBI generic).
  • Internal sqlite*() functions are no longer exported (#20). safe.write() is no longer exported.

It also includes a few minor improvements and bug fixes. The most important are:

  • Inlined RSQLite.extfuns – use initExtension() to load the many useful extension functions.
  • Methods no longer automatically clone the connection if there is an open result set. This was implemented inconsistently in a handful of places. RSQLite is now more forgiving if you forget to close a result set – it will close it for you, with a warning. It’s still good practice to clean up after yourself with dbClearResult(), but you don’t have to.
  • dbBegin(), dbCommit() and dbRollback() throw errors on failure, rather than returning FALSE. They all gain a name argument to specify named savepoints.
  • dbWriteTable() has been rewritten. It uses a better quoting strategy, throws errors on failure, and automatically adds row names only if they’re strings. (NB: dbWriteTable() also has a method that allows you to load files directly from disk.)

For a complete list of changes, please see the full release notes.


RStudio has teamed up with Datacamp to create a new, interactive way to learn dplyr. Dplyr is an R package that provides a fast, intuitive way to transform data sets with R. It introduces five functions, optimized in C++, that can handle ~90% of data manipulation tasks. These functions are lightning fast, which lets you accomplish more things—with more data—than you could otherwise. They are also designed to be intuitive and easy to learn, which makes R more user friendly. But this is just the beginning. Dplyr also automates groupwise operations in R, provides a standard syntax for accessing and manipulating database data with R, and much more.

In the course, you will learn how to use dplyr to

  • select() variables and filter() observations from your data in a targeted way
  • arrange() observations within your data set by value
  • derive new variables from your data with mutate()
  • create summary statistics with summarise()
  • perform groupwise operations with group_by()
  • use the dplyr syntax to access data stored in a database outside of R.
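A quick sketch of how several of these verbs chain together with %>%, using R’s built-in mtcars data (the derived variable here is just for illustration):

```r
library(dplyr)

mtcars %>%
  filter(cyl == 4) %>%                  # keep only 4-cylinder cars
  mutate(kpl = 0.425 * mpg) %>%         # derive kilometres per litre
  group_by(gear) %>%                    # groupwise operation
  summarise(mean_kpl = mean(kpl)) %>%   # per-group summary
  arrange(desc(mean_kpl))               # order groups by value
```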

You will also practice using the tbl data structure and the new pipe operator in R, %>%.

The course is taught by Garrett Grolemund, RStudio’s Master Instructor, and is organized around Datacamp’s interactive interface. You will receive expert instruction in short, clear videos as you work through a series of progressive exercises. As you work, the Datacamp interface will provide immediate feedback and hints, alerting you when you do something wrong and rewarding you when you do something right. The course is designed to take about 4 hours and requires only a basic familiarity with R.

This is the first course in a RStudio datacamp track that will cover dplyr, ggvis, rmarkdown, and the RStudio IDE. To enroll, visit the datacamp dplyr portal.

ggvis 0.4 is now available on CRAN. You can install it with:
install.packages("ggvis")

The major features of this release are:

  • Boxplots, with layer_boxplots()
chickwts %>% ggvis(~feed, ~weight) %>% layer_boxplots()

ggvis box plot

  • Better stability when errors occur.
  • Better handling of empty data and malformed data.
  • More consistent handling of data in compute pipeline functions.

Because of these changes, interactive graphics with dynamic data sources will work more reliably.

Additionally, there are many small improvements and bug fixes under the hood. You can see the full change log here.

