
We’ve added two new tools that make it even easier to learn Shiny.

Video tutorial


The How to Start with Shiny training video provides a new way to teach yourself Shiny. The video covers everything you need to know to build your own Shiny apps. You’ll learn:

  • The architecture of a Shiny app
  • A template for making apps quickly
  • The basics of building Shiny apps
  • How to add sliders, drop down menus, buttons, and more to your apps
  • How to share Shiny apps
  • How to control reactions in your apps to
    • update displays
    • trigger code
    • reduce computation
    • delay reactions
  • How to add design elements to your apps
  • How to customize the layout of an app
  • How to style your apps with CSS

Altogether, the video contains two hours and 25 minutes of material organized around a navigable table of contents.

Best of all, the video tutorial is completely free. The video is the result of our recent How to Start Shiny webinar series. Thank you to everyone who attended and made the series a success!

Watch the new video tutorial here.

New cheat sheet

The new Shiny cheat sheet provides an up-to-date reference to the most important Shiny functions.


The new sheet replaces the previous version, adding sections on single-file apps, reactivity, CSS, and more. It also gave us a chance to apply some of the things we’ve learned about making cheat sheets since the original Shiny cheat sheet came out.
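If you haven’t seen the single-file format the sheet now covers, here is a minimal sketch (a toy app of our own, not taken from the cheat sheet): the ui object, the server function, and the call to shinyApp() all live together in one app.R file.

# app.R -- a complete Shiny app in a single file
library(shiny)

ui <- fluidPage(
  sliderInput("n", "Number of points", min = 10, max = 1000, value = 100),
  plotOutput("plot")
)

server <- function(input, output) {
  output$plot <- renderPlot({
    hist(rnorm(input$n))
  })
}

shinyApp(ui, server)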

Get the new Shiny cheat sheet here.

This is a guest post by Vincent Warmerdam of koaning.io.

SparkR preview in RStudio

Apache Spark is the hip new technology on the block. It allows you to write scripts in a functional style, and the technology behind it runs iterative tasks very quickly on a cluster of machines. It has been benchmarked to be 10 to 100 times quicker than Hadoop for most machine learning use cases, and soon Spark will also have support for the R language. As of April 2015, SparkR has been merged into Apache Spark and will ship with the upcoming 1.4 release, due early summer 2015. In the meantime, you can use this tutorial to get familiar with the current version of SparkR.

**Disclaimer**: although you can use this tutorial to write Spark jobs with R right now, the new API due this summer will most likely have breaking changes.

Running Spark Locally

The main feature of Spark is the resilient distributed dataset (RDD), a dataset that can be queried in memory, in parallel, on a cluster of machines. You don’t need a cluster of machines to get started with Spark, though; even on a single machine, Spark can efficiently use whatever resources you configure. To keep it simple, we will ignore this configuration for now and do a quick one-click install. You can use devtools to download and install Spark with SparkR.

# Install SparkR straight from GitHub (this builds Spark, so it takes a while)
library(devtools)
install_github("amplab-extras/SparkR-pkg", subdir = "pkg")

This might take a while. But after the installation, the following R code will run Spark jobs for you:

library(magrittr)
library(SparkR)

# Start a local Spark context
sc <- sparkR.init(master = "local")

# Distribute a vector across the cluster and count its elements
sc %>%
  parallelize(1:100000) %>%
  count

This small program generates a list, hands it to Spark (which turns it into an RDD, Spark’s resilient distributed dataset structure) and then counts the number of items in it. SparkR exposes Spark’s RDD API as distributed lists in R, which plays very nicely with magrittr. As long as you follow the API, you don’t need to worry much about how your programs are parallelized for performance.

A more elaborate example

Spark also allows for grouped operations, which might remind you a bit of dplyr.

nums <- runif(100000) * 10

sc %>%
  parallelize(nums) %>%
  map(function(x) round(x)) %>%              # round each number
  filterRDD(function(x) x %% 2) %>%          # keep only the odd numbers
  map(function(x) list(x, 1)) %>%            # build (key, 1) pairs
  reduceByKey(function(x, y) x + y, 1L) %>%  # count occurrences per key
  collect

The Spark API will look very ‘functional’ to programmers used to functional programming languages (which should come as no surprise if you know that Spark is written in Scala). This script does the following:

  1. it creates an RDD from the original data
  2. it maps each number to its rounded value
  3. it filters all even numbers out of the RDD
  4. it creates key/value pairs that can be counted
  5. it reduces the key/value pairs (the 1L is the number of partitions for the resulting RDD)
  6. it collects the results

Spark will have started services locally on your computer, which you can view at http://localhost:4040/stages/. There you should see all the jobs you’ve run, including which jobs have failed, along with their error logs.

Bootstrapping with Spark

These examples are nice, but you can also use the power of Spark for more common data science tasks. Let’s sample a dataset to generate a large RDD, which we will then summarise via bootstrapping. Instead of parallelizing numbers, I will now parallelize data frame samples.

sc <- sparkR.init(master = "local")

# Draw a random sample of n rows from ChickWeight, seeded with s
sample_cw <- function(n, s){
  set.seed(s)
  ChickWeight[sample(nrow(ChickWeight), n), ]
}

# 200 bootstrap samples, spread over 20 partitions
data_rdd <- sc %>%
  parallelize(1:200, 20) %>%
  map(function(s) sample_cw(250, s))

The second argument of the parallelize function sets the number of partitions Spark will use for the resulting RDD. The s argument ensures that each sample uses a different random seed. This data_rdd is useful because it can be reused for multiple purposes.

You can use it to estimate the mean of the weight.

data_rdd %>%
  map(function(x) mean(x$weight)) %>%  # mean weight within each bootstrap sample
  collect %>%
  as.numeric %>%
  hist(20, main = "mean weight, bootstrap samples")

Or you can use it to perform bootstrapped regressions.

# Fit a linear model to one bootstrap sample
train_lm <- function(data_in){
  lm(data = data_in, weight ~ Time)
}

coef_rdd <- data_rdd %>%
  map(train_lm) %>%
  map(function(x) x$coefficients)

# Collect the k-th coefficient from every bootstrapped model
get_coef <- function(k) {
  coef_rdd %>%
    map(function(x) x[k]) %>%
    collect %>%
    as.numeric
}

df <- data.frame(intercept = get_coef(1), time_coef = get_coef(2))
df$intercept %>% hist(breaks = 30, main = "beta coef for intercept")
df$time_coef %>% hist(breaks = 30, main = "beta coef for time")

The slow part of this chain of operations is the creation of the data, because it has to happen locally in R. A more common use case for Spark would be to load a large dataset from S3 into a large EC2 cluster of Spark machines.

More power?

Running Spark locally is nice and already allows for parallelism, but the real payoff comes from running Spark on a cluster of computers. Conveniently, Spark comes with a script that automates the provisioning of a Spark cluster on Amazon AWS.

To get a cluster started, use the spark-ec2 script in the ec2 folder of Apache Spark’s GitHub repo. A more elaborate tutorial can be found here, but if you are already an Amazon user, provisioning a cluster on Amazon is as simple as calling a one-liner:

./spark-ec2 \
--key-pair=spark-df \
--identity-file=/path/spark-df.pem \
--region=eu-west-1 \
-s 3 \
--instance-type=c3.2xlarge \
launch my-spark-cluster

If you ssh into the master node that has just been set up, you can run the following code:

cd /root
git clone https://github.com/amplab-extras/SparkR-pkg.git
cd SparkR-pkg
# Build SparkR against the cluster's Spark version
SPARK_VERSION=1.2.1 ./install-dev.sh
cp -a /root/SparkR-pkg/lib/SparkR /usr/share/R/library/
# Copy the package to every slave node and install it there too
/root/spark-ec2/copy-dir /root/SparkR-pkg
/root/spark/sbin/slaves.sh cp -a /root/SparkR-pkg/lib/SparkR /usr/share/R/library/

Launch SparkR on a Cluster

Finally, to launch SparkR and connect to the Spark EC2 cluster, we run the following code on the master machine:

MASTER=spark://<master-hostname>:7077 ./sparkR

The hostname can be retrieved using:

cat /root/spark-ec2/cluster-url

You can check on the status of your cluster via Spark’s web UI at http://<master-hostname>:8080.

The future

Everything described in this document is subject to change with the next Spark release, but it should help you get a feel for how SparkR works. Official R support is coming to Spark: less for low-level RDD operations, more for its distributed machine learning algorithms and DataFrame objects.

The support for R in the Spark universe might be a game changer. R has always been great for exploratory and interactive analysis on small to medium datasets. With the addition of Spark, R can become a more viable tool for big datasets.

June is the currently planned release date for Spark 1.4, which will allow R users to run data frame operations in parallel on the distributed memory of a cluster of computers, all of it completely open source.

It will be interesting to see what possibilities this brings for the R community.


Action buttons can be tricky to use in Shiny because they work differently than other widgets. Widgets like sliders and select boxes maintain a value that is easy to use in your code. But the value of an action button is arbitrary. What should you do with it? Did you know that you should almost always call the value of an action button from observeEvent() or eventReactive()?
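For instance, here is a minimal sketch of the eventReactive() pattern (a hypothetical app of our own, not one from the article): the sampling code runs only when the button is clicked, not when the app starts or when anything else changes.

library(shiny)

ui <- fluidPage(
  actionButton("go", "Sample"),
  plotOutput("hist")
)

server <- function(input, output) {
  # eventReactive() waits for clicks on input$go before (re)running its code
  samples <- eventReactive(input$go, {
    rnorm(100)
  })

  output$hist <- renderPlot({
    hist(samples(), main = "100 random normal values")
  })
}

shinyApp(ui, server)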

The newest article at the Shiny Development Center explains how action buttons work, and it provides five useful patterns for working with action buttons. These patterns also work well with action links.

Read the article here.

Data visualization cheatsheet

We’ve added a new cheatsheet to our collection. Data Visualization with ggplot2 describes how to build a plot with ggplot2 and the grammar of graphics. You will find helpful reminders of how to use:

  • geoms
  • stats
  • scales
  • coordinate systems
  • facets
  • position adjustments
  • legends, and
  • themes

The cheatsheet also documents tips on zooming.

Download the cheatsheet here.

Bonus – Frans van Dunné of Innovate Online has provided Spanish translations of the Data Wrangling, R Markdown, Shiny, and Package Development cheatsheets. Download them at the bottom of the cheatsheet gallery.

Cheatsheet

We’ve added a new cheatsheet to our collection! Package Development with devtools will help you find the most useful functions for building packages in R. The cheatsheet will walk you through the steps of building a package:

  • Setting up the package structure
  • Adding a DESCRIPTION file
  • Writing code
  • Writing tests
  • Writing documentation with roxygen
  • Adding data sets
  • Building a NAMESPACE, and
  • Including vignettes

The sheet focuses on Hadley Wickham’s devtools package, and it is a useful supplement to Hadley’s book R Packages, which you can read online at r-pkgs.had.co.nz.

Download the sheet here.

Bonus – Vivian Zhang of SupStat Analytics has kindly translated the existing Data Wrangling, R Markdown, and Shiny cheatsheets into Chinese. You can download the translations at the bottom of the cheatsheet gallery.

We’ve teamed up with DataCamp to make a self-paced online course that teaches ggvis, the newest data visualization package by Hadley Wickham and Winston Chang. The ggvis course pairs challenging exercises, interactive feedback, and “to the point” videos to let you learn ggvis in a guided way.

In the course, you will learn how to make and customize graphics with ggvis. You’ll learn the commands and syntax that ggvis uses to build graphics, and you’ll learn the theory that underlies ggvis. ggvis implements the grammar of graphics, a logical method for building graphs that is easy to use and to extend. Finally, since this is ggvis, you’ll learn to make interactive graphics with sliders and other user controls.
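As a small taste of that grammar, here is a minimal sketch (our own example, not from the course, using the mtcars data that ships with R): a scatterplot whose smoothing span is controlled by an interactive slider.

library(ggvis)

mtcars %>%
  ggvis(~wt, ~mpg) %>%   # map weight to x and mileage to y
  layer_points() %>%     # draw the raw observations
  layer_smooths(span = input_slider(0.2, 1, label = "Smoothing span"))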

The first part of the tutorial is available for free, so you can start learning immediately.

RStudio has teamed up with O’Reilly media to create a new way to learn R!

The Introduction to Data Science with R video course is a comprehensive introduction to the R language. It’s ideal for non-programmers with no data science experience or for data scientists switching to R from Excel, SAS or other software.

Join RStudio Master Instructor Garrett Grolemund as he covers the three skill sets of data science: computer programming (with R), manipulating data sets (including loading, cleaning, and visualizing data), and modeling data with statistical methods. You’ll learn R’s syntax and grammar as well as how to load, save, and transform data, generate beautiful graphs, and fit statistical models to the data.

All of the techniques introduced in this video are motivated by real problems that involve real datasets. You’ll get plenty of hands-on experience with R (and not just hear about it!), and lots of help if you get stuck.

You’ll also learn how to use the ggplot2, reshape2, and dplyr packages.

The course contains over eight hours of instruction. You can access the first hour free from O’Reilly’s website. The course covers the same content as our two day Introduction to Data Science with R workshop, right down to the same exercises. But unlike our workshops, the videos are self-paced, which can help you learn R in a more relaxed way.

To learn more, visit Introduction to Data Science with R.

datacamp-dplyr

RStudio has teamed up with DataCamp to create a new, interactive way to learn dplyr. dplyr is an R package that provides a fast, intuitive way to transform data sets with R. It introduces five functions, optimized in C++, that can handle ~90% of data manipulation tasks. These functions are lightning fast, which lets you accomplish more things, with more data, than you could otherwise. They are also designed to be intuitive and easy to learn, which makes R more user friendly. But this is just the beginning. dplyr also automates groupwise operations in R, provides a standard syntax for accessing and manipulating database data with R, and much more.

In the course, you will learn how to use dplyr to

  • select() variables and filter() observations from your data in a targeted way
  • arrange() observations within your data set by value
  • derive new variables from your data with mutate()
  • create summary statistics with summarise()
  • perform groupwise operations with group_by()
  • use the dplyr syntax to access data stored in a database outside of R.

You will also practice using the tbl data structure and the new pipe operator in R, %>%.
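To give a flavor of these verbs working together, here is a minimal sketch (our own example, not part of the course materials) using the built-in mtcars data:

library(dplyr)

mtcars %>%
  select(mpg, cyl, wt) %>%             # keep only the variables of interest
  filter(wt < 5) %>%                   # drop the heaviest cars
  mutate(kml = mpg * 0.425) %>%        # derive a new variable (km per liter)
  group_by(cyl) %>%                    # set up groupwise operations
  summarise(mean_kml = mean(kml)) %>%  # one summary row per group
  arrange(desc(mean_kml))              # order the result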

The course is taught by Garrett Grolemund, RStudio’s Master Instructor, and is organized around DataCamp’s interactive interface. You will receive expert instruction in short, clear videos as you work through a series of progressive exercises. As you work, the DataCamp interface will provide immediate feedback and hints, alerting you when you do something wrong and rewarding you when you do something right. The course is designed to take about four hours and requires only a basic familiarity with R.

This is the first course in an RStudio DataCamp track that will cover dplyr, ggvis, rmarkdown, and the RStudio IDE. To enroll, visit the DataCamp dplyr portal.


Some of the most innovative Shiny apps share data across user sessions. Some apps share the results of one session for use in future sessions; others track user characteristics over time and make them available as part of the app.

This level of sophistication creates tricky design choices when you host your app on a server. A nimble server will open new instances of your app to speed up performance, or relaunch your app on a bigger server when it becomes popular. How should you ensure that your app can find and use its data trail along the way?

Shiny Server developer Jeff Allen explains the best ways to share data between sessions in Share data across sessions with ShinyApps.io, a new article at the Shiny Dev Center.

Registration is now open for the next Master R Development workshop led by Hadley Wickham, author of over 30 R packages and the book Advanced R. The workshop will be held on January 19 and 20 in the San Francisco Bay Area.

The workshop is a two-day course on advanced R practices and package development. You’ll learn the three main paradigms of R programming: functional programming, object oriented programming, and metaprogramming, as well as how to make R packages, the key to well-documented, well-tested, and easily distributed R code.

To learn more or register, visit rstudio-sfbay.eventbrite.com.
