As R users know, we’re continuously improving the RStudio IDE. This includes RStudio Server Pro, where organizations that want to deploy the IDE at scale will find a growing set of recently enhanced features.

If you’re not already familiar with RStudio Server Pro, here’s an updated summary page and a comparison to RStudio Server worth checking out. Or you can skip all of that and download a free 45-day evaluation right now!

WHAT’S NEW IN RSTUDIO SERVER PRO (v0.98.1091)

Naturally, the latest RStudio Server Pro has all of the new features found in the open source server version of the RStudio IDE. These include improvements to R Markdown document and Shiny app creation, easier R package development, better debugging and source editing, and support for Internet Explorer 10 and 11 and RHEL 7.

Recently, we added even more powerful features exclusively for RStudio Server Pro:

  • Load balancing based on factors you control, ensuring R users are automatically assigned to the best available server in a cluster.
  • Flexible resource allocation by user or group. Now you can allocate cores, set scheduler priority, control the version(s) of R and enforce memory and CPU limits.
  • New security enhancements. Leverage PAM to issue Kerberos tickets, move Google Accounts support to OAuth 2.0, and allow administrators to disable access to various features.

For a full list of what’s changed in more depth, make sure to read the RStudio Server Pro admin guide.

THE RSTUDIO SERVER PRO BASICS

In addition to the newest features above, there are many more that make RStudio Server Pro an upgrade to the open source IDE. Here’s a quick list:

  • An administrative dashboard that provides insight into active sessions, server health, and monitoring of system-wide and per-user performance and resources
  • Authentication using system accounts, ActiveDirectory, LDAP, or Google Accounts
  • Full support for the Pluggable Authentication Module (PAM)
  • HTTP enhancements add support for SSL and keep-alive for improved performance
  • Ability to restrict access to the server by IP
  • Customizable server health checks
  • Suspend, terminate, or assume control of user sessions for assistance and troubleshooting

That’s a lot to discover! Please download the newest version of RStudio Server Pro and as always let us know how it’s working and what else you’d like to see.

I’m very pleased to announce that dplyr 0.4.0 is now available from CRAN. Get the latest version by running:

install.packages("dplyr")

dplyr 0.4.0 includes over 80 minor improvements and bug fixes, which are described in detail in the release notes. Here I wanted to draw your attention to two areas that have particularly improved since dplyr 0.3, two-table verbs and data frame support.

Two-table verbs

dplyr now has full support for all two-table verbs provided by SQL:

  • Mutating joins, which add new variables to one table from matching rows in another: inner_join(), left_join(), right_join(), full_join(). (Support for non-equi joins is planned for dplyr 0.5.0.)
  • Filtering joins, which filter observations from one table based on whether or not they match an observation in the other table: semi_join(), anti_join().
  • Set operations, which combine the observations in two data sets as if they were set elements: intersect(), union(), setdiff().
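
As a quick illustration of the first two families (a minimal sketch with made-up tables; the column names are arbitrary):

```r
library(dplyr)

df1 <- data_frame(x = c(1, 2, 3), a = c("a", "b", "c"))
df2 <- data_frame(x = c(2, 3, 4), b = c("B", "C", "D"))

# Mutating join: add b from df2 to the matching rows of df1
left_join(df1, df2, by = "x")   # 3 rows; b is NA where x = 1

# Filtering joins: keep (or drop) rows of df1 that have a match in df2
semi_join(df1, df2, by = "x")   # the rows where x = 2 and x = 3
anti_join(df1, df2, by = "x")   # the row where x = 1
```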

Together, these verbs should allow you to solve 95% of data manipulation problems that involve multiple tables. If any of the concepts are unfamiliar to you, I highly recommend reading the two-table vignette (and if you still don’t understand, please let me know so I can make it better).

Data frames

dplyr wraps data frames in a tbl_df class. These objects are structured in exactly the same way as regular data frames, but their behaviour has been tweaked a little to make them easier to work with. The new data_frames vignette describes how dplyr works with data frames in general, and below I highlight some of the features new in 0.4.0.

Printing

The biggest difference is printing: print.tbl_df() doesn’t try to print 10,000 rows! Printing got a lot of love in dplyr 0.4 and now:

  • All print() methods invisibly return their input, so you can interleave print() statements into a pipeline to see interim results.
  • If you’ve managed to produce a 0-row data frame, dplyr won’t try to print the data, but will tell you the column names and types:
    data_frame(x = numeric(), y = character())
    #> Source: local data frame [0 x 2]
    #> 
    #> Variables not shown: x (dbl), y (chr)
  • dplyr never prints row names since no dplyr method is guaranteed to preserve them:
    df <- data.frame(x = c(a = 1, b = 2, c = 3))
    df
    #>   x
    #> a 1
    #> b 2
    #> c 3
    df %>% tbl_df()
    #> Source: local data frame [3 x 1]
    #> 
    #>   x
    #> 1 1
    #> 2 2
    #> 3 3

    I don’t think using row names is a good idea because it violates one of the principles of tidy data: every variable should be stored in the same way.

    To make life a bit easier if you do have row names, you can use the new add_rownames() to turn your row names into a proper variable:

    df %>% 
      add_rownames()
    #>   rowname x
    #> 1       a 1
    #> 2       b 2
    #> 3       c 3

    (But you’re better off never creating them in the first place.)

  • The default for options(dplyr.print_max) is now 20, so dplyr will never print more than 20 rows of data (previously it was 100). The best way to see more rows of data is to use View().

Coercing lists to data frames

When you have a list of vectors of equal length that you want to turn into a data frame, dplyr provides as_data_frame() as a simple alternative to as.data.frame(). as_data_frame() is considerably faster than as.data.frame() because it does much less:

l <- replicate(26, sample(100), simplify = FALSE)
names(l) <- letters
microbenchmark::microbenchmark(
  as_data_frame(l),
  as.data.frame(l)
)
#> Unit: microseconds
#>              expr      min        lq   median        uq      max neval
#>  as_data_frame(l)  101.856  112.0615  124.855  143.0965  254.193   100
#>  as.data.frame(l) 1402.075 1466.6365 1511.644 1635.1205 3007.299   100

It’s difficult to precisely describe what as.data.frame(x) does, but it’s similar to do.call(cbind, lapply(x, data.frame)) – it coerces each component to a data frame and then cbind()s them all together.

The speed of as.data.frame() is not usually a bottleneck in interactive use, but can be a problem when combining thousands of lists into one tidy data frame (this is common when working with data stored in json or xml).
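
For instance, a list of record-style lists (the shape you often get after parsing JSON) can be converted record by record with as_data_frame() and then stacked with bind_rows() (a sketch; the records are invented):

```r
library(dplyr)

records <- list(
  list(id = 1, name = "alpha"),
  list(id = 2, name = "beta")
)

# Each record becomes a one-row data frame, then the rows are stacked
df <- bind_rows(lapply(records, as_data_frame))
df
```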

Binding rows and columns

dplyr now provides bind_rows() and bind_cols() for binding data frames together. Compared to rbind() and cbind(), the functions:

  • Accept either individual data frames, or a list of data frames:
    a <- data_frame(x = 1:5)
    b <- data_frame(x = 6:10)
    
    bind_rows(a, b)
    #> Source: local data frame [10 x 1]
    #> 
    #>    x
    #> 1  1
    #> 2  2
    #> 3  3
    #> 4  4
    #> 5  5
    #> .. .
    bind_rows(list(a, b))
    #> Source: local data frame [10 x 1]
    #> 
    #>    x
    #> 1  1
    #> 2  2
    #> 3  3
    #> 4  4
    #> 5  5
    #> .. .

    If x is a list of data frames, bind_rows(x) is equivalent to do.call(rbind, x).

  • Are much faster:
    dfs <- replicate(100, data_frame(x = runif(100)), simplify = FALSE)
    microbenchmark::microbenchmark(
      do.call("rbind", dfs),
      bind_rows(dfs)
    )
    #> Unit: microseconds
    #>                   expr      min        lq   median        uq       max
    #>  do.call("rbind", dfs) 5344.660 6605.3805 6964.236 7693.8465 43457.061
    #>         bind_rows(dfs)  240.342  262.0845  317.582  346.6465  2345.832
    #>  neval
    #>    100
    #>    100

(Generally you should avoid bind_cols() in favour of a join; otherwise check carefully that the rows are in a compatible order).
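
To see why (a small made-up example): bind_cols() pairs rows purely by position, so inputs sorted differently are silently misaligned, whereas a join matches on a key:

```r
library(dplyr)

people <- data_frame(id = c(1, 2), name = c("ann", "bob"))
scores <- data_frame(id = c(2, 1), score = c(10, 20))

# Positional binding: ann (id 1) gets paired with the id-2 row of scores
bind_cols(people, scores)

# A key-based join pairs the right rows regardless of order
left_join(people, scores, by = "id")
```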

List-variables

Data frames are usually made up of a list of atomic vectors that all have the same length. However, it’s also possible to have a variable that’s a list, which I call a list-variable. Because of data.frame()’s complex coercion rules, the easiest way to create a data frame containing a list-variable is with data_frame():

data_frame(x = 1, y = list(1), z = list(list(1:5, "a", "b")))
#> Source: local data frame [1 x 3]
#> 
#>   x        y         z
#> 1 1 <dbl[1]> <list[3]>

Note how list-variables are printed: a list-variable could contain a lot of data, so dplyr only shows a brief summary of the contents. List-variables are useful for:

  • Working with summary functions that return more than one value:
    qs <- mtcars %>%
      group_by(cyl) %>%
      summarise(y = list(quantile(mpg)))
    
    # Unnest input to collapse into rows
    qs %>% tidyr::unnest(y)
    #> Source: local data frame [15 x 2]
    #> 
    #>    cyl    y
    #> 1    4 21.4
    #> 2    4 22.8
    #> 3    4 26.0
    #> 4    4 30.4
    #> 5    4 33.9
    #> .. ...  ...
    
    # To extract individual elements into columns, wrap the result in rowwise()
    # then use summarise()
    qs %>% 
      rowwise() %>% 
      summarise(q25 = y[2], q75 = y[4])
    #> Source: local data frame [3 x 2]
    #> 
    #>     q25   q75
    #> 1 22.80 30.40
    #> 2 18.65 21.00
    #> 3 14.40 16.25
  • Keeping associated data frames and models together:
    by_cyl <- split(mtcars, mtcars$cyl)
    models <- lapply(by_cyl, lm, formula = mpg ~ wt)
    
    data_frame(cyl = c(4, 6, 8), data = by_cyl, model = models)
    #> Source: local data frame [3 x 3]
    #> 
    #>   cyl            data   model
    #> 1   4 <S3:data.frame> <S3:lm>
    #> 2   6 <S3:data.frame> <S3:lm>
    #> 3   8 <S3:data.frame> <S3:lm>

dplyr’s support for list-variables continues to mature. In 0.4.0, you can join and row bind list-variables and you can create them in summarise and mutate.

My vision of list-variables is still partial and incomplete, but I’m convinced that they will make pipeable APIs for modelling much easier. See the draft lowliner package for more explorations in this direction.

Bonus

My colleague, Garrett, helped me make a cheat sheet that summarizes the data wrangling features of dplyr 0.4.0. You can download it from RStudio’s new gallery of R cheat sheets.

[Image: the data wrangling cheat sheet]

Jeroen Ooms and I are very pleased to announce a new version of RMySQL, the R package that allows you to talk to MySQL (and MariaDB) databases. We have taken over maintenance from Jeffrey Horner, who has done a great job of maintaining the package over the last few years but no longer has time to look after it. Thanks for all your hard work, Jeff!

Using RMySQL

library(DBI)

# Connect to a public database that I'm running on Google's 
# cloud SQL service. It contains a copy of the data in the
# datasets package.
con <-  dbConnect(RMySQL::MySQL(), 
  username = "public", 
  password = "F60RUsyiG579PeKdCH",
  host = "173.194.227.144", 
  port = 3306, 
  dbname = "datasets"
)

# Run a query
dbGetQuery(con, "SELECT * FROM mtcars WHERE cyl = 4 AND mpg < 23")
#>       row_names  mpg cyl  disp  hp drat    wt  qsec vs am gear carb
#> 1    Datsun 710 22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1
#> 2      Merc 230 22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
#> 3 Toyota Corona 21.5   4 120.1  97 3.70 2.465 20.01  1  0    3    1
#> 4    Volvo 142E 21.4   4 121.0 109 4.11 2.780 18.60  1  1    4    2

# It's polite to let the database know when you're done
dbDisconnect(con)
#> [1] TRUE

It’s generally a bad idea to put passwords in your code, so instead of typing them directly, you can create a file called ~/.my.cnf that contains

[cloudSQL]
username=public
password=F60RUsyiG579PeKdCH
host=173.194.227.144
port=3306
database=datasets

Then you can connect with:

con <-  dbConnect(RMySQL::MySQL(), group = "cloudSQL")

Changes in this release

RMySQL 0.10.0 is mostly a cleanup release. RMySQL is one of the oldest packages on CRAN: according to the timestamps, it is older than many recommended packages and only slightly younger than MASS, so a facelift was well overdue.

The most important change is an improvement to the build process so that CRAN binaries are now available for Windows and OS X Mavericks. This should make your life much easier if you’re on one of these platforms. We’d love your feedback on the new build scripts. There have been many problems in the past, so we’d like to know that this client works well across platforms and versions of MySQL server.

Otherwise, the changes update RMySQL for DBI 0.3 compatibility:

  • Internal mysql*() functions are no longer exported. Please use the corresponding DBI generics instead.
  • RMySQL gains transaction support with dbBegin(), dbCommit(), and dbRollback(). (But note that MySQL does not allow data definition language statements to be rolled back.)
  • Added method for dbFetch(). Please use this instead of fetch(). dbFetch() now returns a 0-row data frame (instead of an 0-col data frame) if there are no results.
  • Added methods for dbIsValid(). Please use these instead of isIdCurrent().
  • dbWriteTable() has been rewritten. It uses a better quoting strategy, throws errors on failure, and automatically adds row names only if they’re strings. (NB: dbWriteTable() also has a method that allows you to load files directly from disk – this is likely to be faster if your file is in one of the supported formats.)
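
As an example of the new fetch interface (a sketch, assuming the ~/.my.cnf group from above is configured): dbSendQuery() plus dbFetch() lets you retrieve a large result set in batches rather than all at once.

```r
library(DBI)

con <- dbConnect(RMySQL::MySQL(), group = "cloudSQL")

# Send the query, then pull the result back 10 rows at a time
res <- dbSendQuery(con, "SELECT * FROM mtcars")
while (!dbHasCompleted(res)) {
  chunk <- dbFetch(res, n = 10)
  # ... process each chunk here ...
}
dbClearResult(res)
dbDisconnect(con)
```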

For a complete list of changes, please see the full release notes.

ggplot2 1.0.0

As you might have noticed, ggplot2 recently turned 1.0.0. This release incorporated a handful of new features and bug fixes, but most importantly reflects that ggplot2 is now a mature plotting system and it will not change significantly in the future.

This does not mean ggplot2 is dead! The ggplot2 community is rich and vibrant and the number of packages that build on top of ggplot2 continues to grow. We are committed to maintaining ggplot2 so that you can continue to rely on it for years to come.

The ggplot2 book

Since ggplot2 is now stable, and the ggplot2 book is over five years old and rather out of date, I’m also happy to announce that I’m working on a second edition. I’ll be ably assisted in this endeavour by Carson Sievert, who’s so far done a great job of converting the source to Rmd and updating many of the examples to work with ggplot2 1.0.0. In the coming months we’ll be rewriting the data chapter to reflect modern best practices (e.g. tidyr and dplyr), and adding sections about new features.

We’d love your help! The source code for the book is available on GitHub. If you’ve spotted any mistakes in the first edition that you’d like to correct, we’d really appreciate a pull request. If there’s a particular section of the book that you think needs an update (or is just plain missing), please let us know by filing an issue. Unfortunately we can’t turn the book into a free website because of my agreement with the publisher, but at least you can now easily get to the source.

RStudio is happy to announce the availability of the shinyapps.io beta.

Shinyapps.io is an easy-to-use, secure, and scalable hosted service already being used by thousands of professionals and students to deploy Shiny applications on the web. Today we are releasing a significant upgrade as we transition from alpha to beta, the final step before general availability (GA) later this quarter.

New Feature Highlights in shinyapps.io beta

  • Secure and manage authorized users with support for new authentication systems, including Google, GitHub, or a shinyapps.io account.
  • Tune application performance by controlling the resources available. Run multiple R processes per application instance and add application instances.
  • Track performance metrics and simplify application management in a new shinyapps.io dashboard. See an application’s active connections, CPU, memory, and network usage. Review application logs, start, stop, restart, rebuild and archive applications all from one convenient place.

During the beta period, these and all other features in shinyapps.io are available at no charge. At the end of the beta, users may subscribe to a plan of their choice or transition their applications to the free plan.

If you do not already have an account, we encourage anyone developing Shiny applications to try the shinyapps.io beta. We appreciate any and all feedback on our features or proposed packaging and pricing.

Happy New Year!


Give yourself the gift of “mastering” R to start 2015!

Join RStudio Chief Data Scientist Hadley Wickham at the Westin San Francisco on January 19 and 20 for this rare opportunity to learn from one of the R community’s most popular and innovative authors and package developers.

As of this post, the workshop is two-thirds sold out. If you’re in or near California and want to boost your R programming skills, this is Hadley’s only West Coast public workshop planned for 2015.

Register here: http://rstudio-sfbay.eventbrite.com/

Today we’re excited to announce htmlwidgets, a new framework that brings the best of JavaScript data visualization libraries to R. There are already several packages that take advantage of the framework (leaflet, dygraphs, networkD3, DataTables, and rthreejs) with hopefully many more to come.

An htmlwidget works just like an R plot except it produces an interactive web visualization. A line or two of R code is all it takes to produce a D3 graphic or Leaflet map. Widgets can be used at the R console as well as embedded in R Markdown reports and Shiny web applications. Here’s an example of using leaflet directly from the R console:

[Screenshot: a leaflet map displayed in the RStudio Viewer pane]
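
A minimal version of that console session might look like this (a sketch; the coordinates and popup text are arbitrary):

```r
library(leaflet)

# Build a map widget: a tile layer plus a single marker
m <- leaflet() %>%
  addTiles() %>%
  addMarkers(lng = -122.4194, lat = 37.7749, popup = "San Francisco")

m  # printing the widget renders it
```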

When printed at the console, the leaflet widget displays in the RStudio Viewer pane. All of the tools typically available for plots are also available for widgets, including history, zooming, and export to file/clipboard. (Note that when not running within RStudio, widgets will display in an external web browser.)

Here’s the same widget in an R Markdown report. Widgets automatically print as HTML within R Markdown documents and even respect the default knitr figure width and height.

[Screenshot: the same leaflet widget embedded in an R Markdown report]

Widgets also provide Shiny output bindings, so they can easily be used within web applications. Here’s the same widget in a Shiny application:

[Screenshot: the same leaflet widget in a Shiny application]

Bringing JavaScript to R

The htmlwidgets framework is a collaboration between Ramnath Vaidyanathan (rCharts), Kenton Russell (Timely Portfolio), and RStudio. We’ve all spent countless hours creating bindings between R and the web and were motivated to create a framework that made this as easy as possible for all R developers.

There are a plethora of libraries available that create attractive and fully interactive data visualizations for the web. However, the programming interface to these libraries is JavaScript, which places them outside the reach of nearly all statisticians and analysts. htmlwidgets makes it extremely straightforward to create an R interface for any JavaScript library.

Here are a few widget libraries that have been built so far:

  • leaflet, a library for creating dynamic maps that support panning and zooming, with various annotations like markers, polygons, and popups.
  • dygraphs, which provides rich facilities for charting time-series data and includes support for many interactive features including series/point highlighting, zooming, and panning.
  • networkD3, a library for creating D3 network graphs including force directed networks, Sankey diagrams, and Reingold-Tilford tree networks.
  • DataTables, which displays R matrices or data frames as interactive HTML tables that support filtering, pagination, and sorting.
  • rthreejs, which features 3D scatterplots and globes based on WebGL.

All of these libraries combine visualization with direct interactivity, enabling users to explore data dynamically. For example, time-series visualizations created with dygraphs allow dynamic panning and zooming:

[Animation: panning and zooming a dygraphs time-series chart]

Learning More

To learn more about the framework and see a showcase of the available widgets in action check out the htmlwidgets web site. To learn more about building your own widgets, install the htmlwidgets package from CRAN and check out the developer documentation.


httr 0.6.0 is now available on CRAN. The httr package makes it easy to talk to web APIs from R. Learn more in the quick start vignette.

This release is mostly bug fixes and minor improvements. The most important are:

  • handle_reset(), which allows you to reset the default handle if you get the error “easy handle already used in multi handle”.
  • write_stream(), which lets you process the response from a server as a stream of raw vectors (#143).
  • VERB(), which allows you to send a request with a custom HTTP verb.
  • brew_dr(), which checks for common problems. It currently checks whether your libcurl uses NSS; this is unlikely to work, so it gives you some advice on how to fix the problem (thanks to Dirk Eddelbuettel for debugging this problem and suggesting a remedy).
  • Added support for Google OAuth2 service accounts. (#119, thanks to help from @siddharthab). See ?oauth_service_token for details.

I’ve also switched from RC to R6 (which should make it easier to extend OAuth classes for non-standard OAuth implementations), and tweaked the use of the backend SSL certificate details bundled with httr. See the release notes for complete details.

tidyr 0.2.0 is now available on CRAN. tidyr makes it easy to “tidy” your data, storing it in a consistent form so that it’s easy to manipulate, visualise and model. Tidy data has variables in columns and observations in rows, and is described in more detail in the tidy data vignette. Install tidyr with:

install.packages("tidyr")

There are three important additions to tidyr 0.2.0:

  • expand() is a wrapper around expand.grid() that allows you to generate all possible combinations of two or more variables. In conjunction with dplyr::left_join(), this makes it easy to fill in missing rows of data.
    sales <- dplyr::data_frame(
      year = rep(c(2012, 2013), c(4, 2)),
      quarter = c(1, 2, 3, 4, 2, 3), 
      sales = sample(6) * 100
    )
    
    # Missing sales data for 2013 Q1 & Q4
    sales
    #> Source: local data frame [6 x 3]
    #> 
    #>   year quarter sales
    #> 1 2012       1   400
    #> 2 2012       2   200
    #> 3 2012       3   500
    #> 4 2012       4   600
    #> 5 2013       2   300
    #> 6 2013       3   100
    
    # Missing values are now explicit
    sales %>% 
      expand(year, quarter) %>%
      dplyr::left_join(sales)
    #> Joining by: c("year", "quarter")
    #> Source: local data frame [8 x 3]
    #> 
    #>   year quarter sales
    #> 1 2012       1   400
    #> 2 2012       2   200
    #> 3 2012       3   500
    #> 4 2012       4   600
    #> 5 2013       1    NA
    #> 6 2013       2   300
    #> 7 2013       3   100
    #> 8 2013       4    NA
  • In the process of data tidying, it’s sometimes useful to have a column of a data frame that is a list of vectors. unnest() lets you simplify that column back down to an atomic vector, duplicating the original rows as needed. (NB: If you’re working with data frames containing lists, I highly recommend using dplyr’s tbl_df, which will display list-columns in a way that makes their structure more clear. Use dplyr::data_frame() to create a data frame wrapped with the tbl_df class.)
    raw <- dplyr::data_frame(
      x = 1:3,
      y = c("a", "d,e,f", "g,h")
    )
    # y is a character vector of comma-separated strings
    raw
    #> Source: local data frame [3 x 2]
    #> 
    #>   x     y
    #> 1 1     a
    #> 2 2 d,e,f
    #> 3 3   g,h
    
    # y is a list of character vectors
    as_list <- raw %>% mutate(y = strsplit(y, ","))
    as_list
    #> Source: local data frame [3 x 2]
    #> 
    #>   x        y
    #> 1 1 <chr[1]>
    #> 2 2 <chr[3]>
    #> 3 3 <chr[2]>
    
    # y is a character vector; rows are duplicated as needed
    as_list %>% unnest(y)
    #> Source: local data frame [6 x 2]
    #> 
    #>   x y
    #> 1 1 a
    #> 2 2 d
    #> 3 2 e
    #> 4 2 f
    #> 5 3 g
    #> 6 3 h
  • separate() has a new extra argument that allows you to control what happens if a column doesn’t always split into the same number of pieces.
    raw %>% separate(y, c("trt", "B"), ",")
    #> Error: Values not split into 2 pieces at 1, 2
    raw %>% separate(y, c("trt", "B"), ",", extra = "drop")
    #> Source: local data frame [3 x 3]
    #> 
    #>   x trt  B
    #> 1 1   a NA
    #> 2 2   d  e
    #> 3 3   g  h
    raw %>% separate(y, c("trt", "B"), ",", extra = "merge")
    #> Source: local data frame [3 x 3]
    #> 
    #>   x trt   B
    #> 1 1   a  NA
    #> 2 2   d e,f
    #> 3 3   g   h

To read about the other minor changes and bug fixes, please consult the release notes.

reshape2 1.4.1

There’s also a new version of reshape2, 1.4.1. It includes three bug fixes for melt.data.frame() contributed by Kevin Ushey. Read all about them in the release notes and install it with:

install.packages("reshape2")

We’ve teamed up with DataCamp to make a self-paced online course that teaches ggvis, the newest data visualization package by Hadley Wickham and Winston Chang. The ggvis course pairs challenging exercises, interactive feedback, and “to the point” videos to let you learn ggvis in a guided way.

In the course, you will learn how to make and customize graphics with ggvis. You’ll learn the commands and syntax that ggvis uses to build graphics, and you’ll learn the theory that underlies ggvis. ggvis implements the grammar of graphics, a logical method for building graphs that is easy to use and to extend. Finally, since this is ggvis, you’ll learn to make interactive graphics with sliders and other user controls.

The first part of the tutorial is available for free, so you can start learning immediately.
