Performance

One may wonder how fast tidyfst is. Well, it depends. Generally, it is about as fast as data.table because it is backed by it, but it spends a little extra time generating the data.table code. This overhead is marginal on large (and even small) data sets.
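To make this concrete, here is a minimal sketch (not part of the benchmark below) of a tidyfst verb next to a hand-written data.table call that roughly approximates what tidyfst generates internally; mutate_dt is used purely as an illustrative example.

# Illustrative sketch: a tidyfst verb and an approximately equivalent
# hand-written data.table call (the exact code tidyfst generates may differ).
library(tidyfst)
library(data.table)

dt = as.data.table(iris)

res1 = mutate_dt(dt, Sepal.Area = Sepal.Length * Sepal.Width)  # tidyfst verb
res2 = copy(dt)[, Sepal.Area := Sepal.Length * Sepal.Width]    # data.table by hand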

Now let’s run a test to compare the performance of tidyfst, data.table and dplyr. In this vignette we’ll use a small data set. The example is adapted from the data.table benchmarks (https://h2oai.github.io/db-benchmark/) and tweaked here. These tests all focus on computation by groups.

First let’s load the package and generate some data.

# load packages
library(tidyfst)
#> Thank you for using tidyfst!
#> To acknowledge our work, please cite the package:
#> Huang et al., (2020). tidyfst: Tidy Verbs for Fast Data Manipulation. Journal of Open Source Software, 5(52), 2388, https://doi.org/10.21105/joss.02388
library(data.table)
library(dplyr)
#> 
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:data.table':
#> 
#>     between, first, last
#> The following objects are masked from 'package:tidyfst':
#> 
#>     between, cummean, nth
#> The following objects are masked from 'package:stats':
#> 
#>     filter, lag
#> The following objects are masked from 'package:base':
#> 
#>     intersect, setdiff, setequal, union
library(bench)

# generate the data
# if you have a HPC and want to try larger data sets, increase N
N = 1e4 
K = 1e2

set.seed(2020)

cat(sprintf("Producing data of %s rows and %s K groups factors\n", N, K))
#> Producing data of 10000 rows and 100 K groups factors

DT = data.table(
  id1 = sample(sprintf("id%03d",1:K), N, TRUE),      # large groups (char)
  id2 = sample(sprintf("id%03d",1:K), N, TRUE),      # large groups (char)
  id3 = sample(sprintf("id%010d",1:(N/K)), N, TRUE), # small groups (char)
  id4 = sample(K, N, TRUE),                          # large groups (int)
  id5 = sample(K, N, TRUE),                          # large groups (int)
  id6 = sample(N/K, N, TRUE),                        # small groups (int)
  v1 =  sample(5, N, TRUE),                          # int in range [1,5]
  v2 =  sample(5, N, TRUE),                          # int in range [1,5]
  v3 =  round(runif(N,max=100),4)                    # numeric e.g. 23.5749
)

object_size(DT)
#> 527.7 Kb

This data set is rather small: its size is around 527 Kb. However, with the bench package we can still detect the differences by increasing the number of iterations. In this way, the examples listed here can be run even on relatively low-performance computers.
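As a small aside (this snippet is not part of the benchmark), the iterations argument of bench::mark() is what repeats each expression, which is how we obtain stable timings for fast operations on such a small table.

# Tiny illustration: repeat a fast expression many times to get a stable
# timing estimate on a small data set.
bench::mark(
  median(DT$v3),
  iterations = 100
)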

Q1

Here, we compute the median and standard deviation by groups. Since dplyr v1.0.0, the regrouping behaviour can sometimes be confusing (it comes with a warning message). If you use it, make sure the data are in the right groups before the grouped computation. In tidyfst and data.table, the “by” parameter specifies the groups. We do not check whether the results are equal, because dplyr returns a tibble even when we feed it a data.table in the first place. Each test below uses 10 iterations.
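As a side note (this variant is not benchmarked here), the repeated regrouping message from dplyr can be silenced by setting the .groups argument of summarise() explicitly.

# Optional: drop the grouping explicitly to avoid the regrouping message.
DT %>%
  group_by(id4, id5) %>%
  summarise(median_v3 = median(v3),
            sd_v3 = sd(v3),
            .groups = "drop")

The benchmark itself is as follows.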

bench::mark(
  data.table = DT[,.(median_v3 = median(v3),
                     sd_v3 = sd(v3)),
                  by = .(id4,id5)],
  tidyfst = DT %>%
    summarise_dt(
      by = c("id4", "id5"),
      median_v3 = median(v3),
      sd_v3 = sd(v3)
    ),
  dplyr = DT %>%
    group_by(id4,id5,.drop = TRUE) %>%
    summarise(median_v3 = median(v3),sd_v3 = sd(v3)),
  check = FALSE,iterations = 10
) -> q1
#> `summarise()` has grouped output by 'id4'. You can override using the `.groups`
#> argument.

q1
#> # A tibble: 3 × 6
#>   expression      min   median `itr/sec` mem_alloc `gc/sec`
#>   <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl>
#> 1 data.table   1.65ms    1.7ms    578.      2.37MB      0  
#> 2 tidyfst      1.66ms    1.7ms    587.    656.64KB      0  
#> 3 dplyr      204.27ms  211.9ms      4.70    5.41MB     15.5

We can see that the time spent by tidyfst and data.table is quite similar, and much less than that of dplyr.

Q2

This example performs quite similarly to the one above. tidyfst may spend a tiny bit more time and memory on code translation than data.table, but it still performs much better than dplyr.

bench::mark(
  data.table =DT[,.(range_v1_v2 = max(v1) - min(v2)),by = id3],
  tidyfst = DT %>% summarise_dt(
    by = id3,
    range_v1_v2 = max(v1) - min(v2)
  ),
  dplyr = DT %>%
    group_by(id3,.drop = TRUE) %>%
    summarise(range_v1_v2 = max(v1) - min(v2)),
  check = FALSE,iterations = 10
) -> q2

q2
#> # A tibble: 3 × 6
#>   expression      min   median `itr/sec` mem_alloc `gc/sec`
#>   <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl>
#> 1 data.table 617.42µs 651.84µs     1461.    92.9KB     162.
#> 2 tidyfst    617.97µs 657.59µs     1517.    92.9KB       0 
#> 3 dplyr        2.71ms   2.76ms      355.   468.7KB       0

Q3

Here we’ll show a rather different test to illustrate the flexibility of tidyfst. In tidyfst, if you write your code more like data.table, it speeds up; if you write it more like dplyr, it may be more readable but slower. tidyfst provides the in_dt function, which lets you write raw data.table code to gain speed when you hit a bottleneck.

In the following example, we use exactly the same syntax as data.table inside tidyfst::in_dt.

bench::mark(
  data.table =DT[order(-v3),.(largest2_v3 = head(v3,2L)),by = id6],
  tidyfst = DT %>%
    in_dt(order(-v3),.(largest2_v3 = head(v3,2L)),by = id6),
  dplyr = DT %>%
    select(id6,largest2_v3 = v3) %>%
    group_by(id6) %>%
    slice_max(largest2_v3,n = 2,with_ties = FALSE),
  check = FALSE,iterations = 10
) -> q3

q3
#> # A tibble: 3 × 6
#>   expression      min   median `itr/sec` mem_alloc `gc/sec`
#>   <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl>
#> 1 data.table   1.75ms   1.82ms     537.   607.77KB      0  
#> 2 tidyfst      1.97ms   2.22ms     461.     1.03MB     51.2
#> 3 dplyr       16.64ms  17.07ms      57.8    2.01MB     14.4

Q4

To summarise multiple columns by group, tidyfst provides a function named summarise_vars, which can be even more convenient than dplyr’s across. You first choose the columns, then tell it what to do, and you can optionally provide the “by” parameter to operate by groups.
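As an aside, and assuming I read the summarise_vars interface correctly (please check ?summarise_vars), the column selection can also be given as a predicate function instead of a range of column names.

# Assumed usage: select columns with a predicate function such as is.numeric.
DT %>% summarise_vars(is.numeric, mean)

The benchmark for this task is as follows.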

bench::mark(
  data.table =DT[,lapply(.SD,mean),by = id4,.SDcols = v1:v3],
  tidyfst = DT %>%
    summarise_vars(
      v1:v3,
      mean,
      by = id4
    ),
  dplyr = DT %>%
    group_by(id4) %>%
    summarise(across(v1:v3,mean)),
  check = FALSE,iterations = 10
) -> q4

q4
#> # A tibble: 3 × 6
#>   expression      min   median `itr/sec` mem_alloc `gc/sec`
#>   <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl>
#> 1 data.table 779.25µs 847.93µs     1160.     489KB      0  
#> 2 tidyfst       2.8ms   3.04ms      331.     319KB      0  
#> 3 dplyr        4.77ms   5.11ms      190.     551KB     21.1

Looking at the performance, tidyfst still lies between data.table and dplyr.

Q5

Now let’s try more groups: here we use all the id columns (id1~id6) as groups, and compute the sum and count. Note that tidyfst is written in data.table, so it does not use dplyr’s n() but data.table’s .N to count by group.
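To isolate that point (this snippet is not part of the benchmark), .N is data.table’s built-in group-size symbol and can be used directly inside summarise_dt.

# .N gives the number of rows in each group, here the size of every id1 group.
DT %>% summarise_dt(count = .N, by = id1)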

bench::mark(
  data.table =DT[,.(v3 = sum(v3),count = .N),by = id1:id6],
  tidyfst = DT %>%
    summarise_dt(
      by = id1:id6,
      v3 = sum(v3),
      count = .N
    ),
  dplyr = DT %>%
    group_by(id1,id2,id3,id4,id5,id6) %>%
    summarise(v3 = sum(v3),count = n()),
  check = FALSE,iterations = 10
) -> q5
#> `summarise()` has grouped output by 'id1', 'id2', 'id3', 'id4', 'id5'. You can
#> override using the `.groups` argument.

q5
#> # A tibble: 3 × 6
#>   expression      min   median `itr/sec` mem_alloc `gc/sec`
#>   <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl>
#> 1 data.table   2.16ms   2.31ms    427.      1.02MB      0  
#> 2 tidyfst      2.49ms   2.52ms    395.      1.02MB      0  
#> 3 dplyr       85.71ms  91.38ms      9.89       3MB     16.8

Last words

While on a data set of ~0.5 Mb we find that the performance of tidyfst lies between data.table and dplyr, the speed is much closer to data.table. In fact, if you try a much larger data set on a computer with plenty of RAM and multiple cores, you’ll find that the performance of tidyfst sticks close to data.table. If you are interested and have a high-performance computer, try generating a larger data set and testing it out. Moreover, while dplyr users may find these data manipulation verbs friendly, the innate syntax of tidyfst is closer to data.table, and it can be a good companion of data.table for some frequently used complex tasks.
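If you want to try that yourself, a rough sketch (assuming your machine has enough RAM to hold the larger table) is to regenerate the data with a larger N and re-run the bench::mark() calls above; data.table’s thread settings can be inspected or fixed beforehand.

# Sketch for a larger test: only N changes, the benchmark code above is reused.
N = 1e8             # e.g. 100 million rows instead of 1e4
K = 1e2

getDTthreads()      # number of threads data.table will use for grouped computations
# setDTthreads(4)   # optionally fix the thread count before benchmarking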

Session information

sessionInfo()
#> R version 4.4.2 (2024-10-31)
#> Platform: x86_64-pc-linux-gnu
#> Running under: Ubuntu 24.04.1 LTS
#> 
#> Matrix products: default
#> BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
#> LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so;  LAPACK version 3.12.0
#> 
#> locale:
#>  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
#>  [3] LC_TIME=en_US.UTF-8        LC_COLLATE=C              
#>  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
#>  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
#>  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
#> 
#> time zone: Etc/UTC
#> tzcode source: system (glibc)
#> 
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#> [1] bench_1.1.3       dplyr_1.1.4       data.table_1.16.2 tidyfst_1.8.1    
#> [5] rmarkdown_2.29   
#> 
#> loaded via a namespace (and not attached):
#>  [1] jsonlite_1.8.9    compiler_4.4.2    tidyselect_1.2.1  Rcpp_1.0.13-1    
#>  [5] stringr_1.5.1     parallel_4.4.2    jquerylib_0.1.4   yaml_2.3.10      
#>  [9] fastmap_1.2.0     R6_2.5.1          generics_0.1.3    knitr_1.49       
#> [13] tibble_3.2.1      maketools_1.3.1   bslib_0.8.0       pillar_1.9.0     
#> [17] rlang_1.1.4       utf8_1.2.4        cachem_1.1.0      stringi_1.8.4    
#> [21] xfun_0.49         sass_0.4.9        sys_3.4.3         cli_3.6.3        
#> [25] withr_3.0.2       magrittr_2.0.3    digest_0.6.37     fst_0.9.8        
#> [29] lifecycle_1.0.4   vctrs_0.6.5       evaluate_1.0.1    glue_1.8.0       
#> [33] buildtools_1.0.0  profmem_0.6.0     fansi_1.0.6       fstcore_0.9.18   
#> [37] tools_4.4.2       pkgconfig_2.0.3   htmltools_0.5.8.1