निरंजन
If I want exact performance statistics for two approaches to doing something in LaTeX, I have heard that there is a benchmarking tool available, namely `l3benchmark`. I looked up its documentation, but I found it too technical. Can I use it for a simple comparison like the following two examples and get the stats?

1. ```
   \documentclass{article}

   \begin{document}
   \newcount\mycnt
   \mycnt=1\relax
   \the\mycnt
   \end{document}
   ```

2. ```
   \documentclass{article}

   \begin{document}
   \newcounter{mycnt}
   \setcounter{mycnt}{1}
   \themycnt
   \end{document}
   ```
Top Answer
Skillmon
I usually don't care for the "general" stats, but only for their comparative behaviour.

For that I wrote some relatively simple code to directly compare multiple approaches (and possibly different arguments) based on code Bruno Le Floch wrote somewhere somewhen (I don't remember where exactly).

The syntax is:

```
\Compare <macro> { <clist-of-macros> } { <clist-of-args> }
```

The `<macro>` is compared to each macro in `<clist-of-macros>`, each getting every argument list in the comma-separated `<clist-of-args>`. The parsing of `<clist-of-args>` isn't very nice, but it suffices for me; in particular, you can lose braces to it, since it is parsed as a comma-separated list (that step loses one set of outer braces). To give an example:

```
\Compare \foo \bar {{a}{b},{c}{d}}
```

This would compare the time needed for `\foo{a}{b}\foo{c}{d}` with the time needed for `\bar{a}{b}\bar{c}{d}`. But if you want to pass a single empty argument, you have to use:

```
\Compare \foo \bar {{{}}}
```

(one set of braces for the top-level argument, one is lost to the `clist` parsing, and one remains, so this is `\foo{}` vs. `\bar{}`)

It outputs its result to the terminal in the format

```none
<macro-list-element> / <macro>
<float>
<float>
<float>
```

This means that over 3 runs, the `<macro-list-element>` took `<float>` times as long as `<macro>` (so a `<float>` smaller than 1 means it was faster than `<macro>`).
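If, unlike me, you want absolute timings rather than ratios, `l3benchmark` itself is enough: its `\benchmark:n` function runs the given code repeatedly and prints the measured time to the terminal. A minimal sketch for the two snippets from the question (the distinct counter names are my choice, so the two variants don't interact):

```
\documentclass{article}
\usepackage{l3benchmark}
\newcount\mycnt
\newcounter{mycount}
\ExplSyntaxOn
% each call prints the measured time (and an ops estimate) to the terminal
\benchmark:n { \mycnt=1\relax }
\benchmark:n { \setcounter{mycount}{1} }
\ExplSyntaxOff
\stop
```

`\benchmark:n` reports absolute numbers for a single snippet; for deciding between two approaches, the ratios described above are usually more robust across machines.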

My entire code follows. (Note that I have no idea how well maintained this is, and I don't know whether any of the document commands apart from `\Compare` still work; `\Compare` is the only one I use, and as far as I can tell it still works.)

```
%% Based on code from Bruno Le Floch

\RequirePackage{xparse}
\RequirePackage{l3benchmark}
\ExplSyntaxOn
\clist_new:N \l__cbench_times_clist
\clist_new:N \l__cbench_base_clist
\clist_new:N \l_cbench_macro_clist
\clist_new:N \l_cbench_args_clist
\bool_new:N \l_cbench_sort_bool
\int_new:N \l__cbench_item_int
\tl_new:N \l__cbench_pre_args_tl
\tl_new:N \l_cbench_macro_tl

\cs_set_protected:Npn \__cbench_benchmark_display:
  {
    \fp_gset:Nn \g_benchmark_ops_fp
      { \g_benchmark_time_fp / \g__benchmark_one_op_fp }
    \clist_put_right:Nx \l__cbench_times_clist
      { \fp_to_decimal:N \g_benchmark_time_fp }
  }

\NewDocumentCommand \Compare { m m O{3} +m }
  {
    \group_begin:
      \tl_set:Nn \l_cbench_macro_tl { #1 }
      \clist_set:Nn \l_cbench_macro_clist { #2 }
      \clist_set:Nn \l_cbench_args_clist { #4 }
      \cbench_run_comparison:n { #3 }
    \group_end:
  }

\NewDocumentCommand \CompareMacros { m m }
  {
    \tl_set:Nn \l_cbench_macro_tl { #1 }
    \clist_set:Nn \l_cbench_macro_clist { #2 }
  }

\NewDocumentCommand \MacroArguments { +m }
  {
    \clist_set:Nn \l_cbench_args_clist { #1 }
  }

\NewDocumentCommand \RunComparison { O{3} }
  {
    \cbench_run_comparison:n { #1 }
  }

\cs_new_protected:Npn \cbench_run_comparison:n #1
  {
    \group_begin:
      \cs_set_eq:NN \__benchmark_display: \__cbench_benchmark_display:
      \__cbench_set_benchmark_tl:V \l_cbench_macro_tl
      \__cbench_list_from_benchmark:NnV
        \l__cbench_base_clist { #1 } \l__cbench_benchmark_tl
      \clist_map_inline:Nn \l_cbench_macro_clist
        {
          \__cbench_set_benchmark_tl:n { ##1 }
          \__cbench_benchmark:nV { #1 } \l__cbench_benchmark_tl
          \iow_term:x { \exp_not:n { ##1 /~} \exp_not:V \l_cbench_macro_tl }
          \int_step_inline:nn { #1 }
            {
              \iow_term:x
                {
                  \fp_eval:n
                    {
                      round
                        (
                          \clist_item:Nn \l__cbench_times_clist { ####1 }
                          / \clist_item:Nn \l__cbench_base_clist { ####1 }
                          , 3
                        )
                    }
                }
            }
        }
    \group_end:
  }

\cs_new_protected:Npn \cbench_benchmark:Nnn #1 #2 #3
  {
    \group_begin:
      \cs_set_eq:NN \__benchmark_display: \__cbench_benchmark_display:
      \__cbench_benchmark:nn { #2 } { #3 }
      \exp_args:NNNx
    \group_end:
    \clist_set:Nn #1 { \clist_use:Nn \l__cbench_times_clist { , } }
  }

\cs_new_protected:Npn \__cbench_benchmark:nn #1 #2
  {
    \clist_clear:N \l__cbench_times_clist
    \prg_replicate:nn { #1 } { \benchmark:n { #2 } }
    \bool_if:NT \l_cbench_sort_bool
      {
        \clist_sort:Nn \l__cbench_times_clist
          {
            \fp_compare:nNnTF { ##1 } > { ##2 }
              \sort_return_swapped:
              \sort_return_same:
          }
      }
  }
\cs_generate_variant:Nn \__cbench_benchmark:nn { nV }

\cs_new_protected:Npn \__cbench_list_from_benchmark:Nnn #1 #2 #3
  {
    \__cbench_benchmark:nn { #2 } { #3 }
    \clist_set_eq:NN #1 \l__cbench_times_clist
  }
\cs_generate_variant:Nn \__cbench_list_from_benchmark:Nnn { NnV }

\cs_new_protected:Npn \__cbench_set_benchmark_tl:n #1
  {
    \clist_if_empty:NTF \l_cbench_args_clist
      {
        \tl_set:Nn \l__cbench_benchmark_tl { #1 }
      }
      {
        \tl_clear:N \l__cbench_benchmark_tl
        \tl_set:Nn \l__cbench_pre_args_tl { #1 }
        \clist_map_function:NN
          \l_cbench_args_clist \__cbench_set_benchmark_tl_aux:n
      }
  }
\cs_new_protected:Npn \__cbench_set_benchmark_tl_aux:n #1
  {
    \tl_put_right:Nx \l__cbench_benchmark_tl
      { \exp_not:o { \l__cbench_pre_args_tl #1 } }
  }
\cs_generate_variant:Nn \__cbench_set_benchmark_tl:n { V }

\ExplSyntaxOff
```

Usage example (of stuff I just compared myself):

```
\documentclass{article}

\input{comparing_benchmark.tex}

\ExplSyntaxOn
\makeatletter

\prg_new_protected_conditional:Npnn \my_if_head_is_N_type:n #1 { TF }
  {
    \if:w
        0
        \__str_if_eq:nn
          { \__kernel_exp_not:w {#1} {} }
          { \__kernel_exp_not:w \exp_after:wN { \use:n #1 {} } }
      \prg_return_true:
    \else:
      \prg_return_false:
    \fi:
  }

\typeout{^^Jempty}
\Compare \my_if_head_is_N_type:nTF \tl_if_head_is_N_type:nTF {{}{}{}}
\Compare \tl_if_head_is_N_type:nTF \my_if_head_is_N_type:nTF {{}{}{}}

\typeout{^^Jspace}
\Compare \my_if_head_is_N_type:nTF \tl_if_head_is_N_type:nTF {{~}{}{}}
\Compare \tl_if_head_is_N_type:nTF \my_if_head_is_N_type:nTF {{~}{}{}}

\typeout{^^Jgroup}
\Compare \my_if_head_is_N_type:nTF \tl_if_head_is_N_type:nTF {{{}}{}{}}
\Compare \tl_if_head_is_N_type:nTF \my_if_head_is_N_type:nTF {{{}}{}{}}

\typeout{^^JN-type}
\Compare \my_if_head_is_N_type:nTF \tl_if_head_is_N_type:nTF {{a}{}{}}
\Compare \tl_if_head_is_N_type:nTF \my_if_head_is_N_type:nTF {{a}{}{}}
\stop
```

This prints to the terminal/log:

```none
empty
\tl_if_head_is_N_type:nTF / \my_if_head_is_N_type:nTF
1.705
1.705
1.705
\my_if_head_is_N_type:nTF / \tl_if_head_is_N_type:nTF
0.581
0.581
0.581

space
\tl_if_head_is_N_type:nTF / \my_if_head_is_N_type:nTF
1.646
1.646
1.646
\my_if_head_is_N_type:nTF / \tl_if_head_is_N_type:nTF
0.612
0.612
0.612

group
\tl_if_head_is_N_type:nTF / \my_if_head_is_N_type:nTF
1.778
1.778
1.778
\my_if_head_is_N_type:nTF / \tl_if_head_is_N_type:nTF
0.562
0.562
0.562

N-type
\tl_if_head_is_N_type:nTF / \my_if_head_is_N_type:nTF
1.685
1.685
1.685
\my_if_head_is_N_type:nTF / \tl_if_head_is_N_type:nTF
0.603
0.603
0.603
```

So `\my_if_head_is_N_type:nTF` took roughly 40% less time than `\tl_if_head_is_N_type:nTF` on the tested arguments.
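Applied to the question's two approaches, the comparison tool can be used as in the following sketch (assuming the code above is saved as `comparing_benchmark.tex`; the wrapper macros and the distinct counter names are my own choices):

```
\documentclass{article}

\input{comparing_benchmark.tex}

\newcount\mycnt
\newcounter{mycount}

% wrappers for the two approaches from the question
\newcommand\useprimitive{\mycnt=1\relax}
\newcommand\uselatexcounter{\setcounter{mycount}{1}}

% empty argument list: each macro is benchmarked as-is
\Compare \useprimitive \uselatexcounter {}
\stop
```

This prints `\uselatexcounter / \useprimitive` followed by three ratios to the terminal, in the format described above.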
