(I want to log the compilation command, its success or failure, the CPU and real time spent running it, the size of the source files, etc.) The Ctuning project or MILEPOST GCC is an (advanced) example motivating this. A simpler motivation is to compute simple statistics on past compilations (e.g. to find the ones that processed a source file bigger than 100k, or that took more than 5 seconds, for instance to decide whether to refactor a big C++ source file into a few smaller ones).
I didn't find any simple program doing so (on Linux)... If you know one (open source), please tell. However, there could be simpler uses of these. I just coded, in my github.com/bstarynk/misc-basile/ repository, a C++ program logged-gcc.cc (GPLv3+ licensed) and its build script compile-logged-gcc.sh.
Typical use might be to have $HOME/bin/ early in your $PATH, to add the compiled logged-gcc executable and a logged-g++ symlink (in that same $HOME/bin/ directory), and then to put there shell scripts like:
#!/bin/bash
# file ~/bin/gcc which should be executable
export LOGGED_CFLAGS='-g -Wall'
export LOGGED_GCC=/usr/bin/gcc-10
exec $HOME/bin/logged-gcc "$@"
and
#!/bin/bash
# file ~/bin/g++ which should be executable
export LOGGED_CXXFLAGS='-g -Wall'
export LOGGED_GXX=/usr/bin/g++-10
exec $HOME/bin/logged-g++ "$@"
Of course, run $HOME/bin/logged-gcc --help to get help, and $HOME/bin/logged-gcc --version to get version information. (Actually, clever $PATH tricks with symlinks could avoid the above shell scripts.)
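For instance, one such trick is to let a single wrapper binary dispatch on its own invocation name. This is only a minimal sketch of the idea (the real logged-gcc.cc may do this differently), with the measuring part left out:

// sketch: one binary behind both a gcc and a g++ symlink picks the
// real compiler from its own invocation name and the environment
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>
#include <unistd.h>

int main(int argc, char **argv) {
    const char *slash = std::strrchr(argv[0], '/');
    std::string name = slash ? slash + 1 : argv[0];   // basename of argv[0]
    const char *env = nullptr, *fallback = nullptr;
    if (name.find("g++") != std::string::npos) {
        env = std::getenv("LOGGED_GXX");
        fallback = "/usr/bin/g++";
    } else {
        env = std::getenv("LOGGED_GCC");
        fallback = "/usr/bin/gcc";
    }
    const char *compiler = env ? env : fallback;
    // a real logger would fork and wait here, to measure the child;
    // this sketch just replaces itself with the compiler
    argv[0] = const_cast<char *>(compiler);
    execv(compiler, argv);
    perror("execv");
    return EXIT_FAILURE;
}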
Then use (on Debian) cat /var/log/messages to see the logging.
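The entries end up in /var/log/messages because the wrapper can talk to syslog(3). A small sketch of such a call (the message layout below is my own, not necessarily what logged-gcc.cc emits):

#include <syslog.h>

// send one compilation record to syslog(3); on a default Debian setup
// LOG_USER/LOG_INFO messages land in /var/log/messages
void log_compilation(const char *cmd, int status,
                     double cpu_sec, double real_sec, long srcsize) {
    openlog("logged-gcc", LOG_PID, LOG_USER);
    syslog(LOG_INFO, "cmd=%s status=%d cpu=%.3fs real=%.3fs srcsize=%ld",
           cmd, status, cpu_sec, real_sec, srcsize);
    closelog();
}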
The point is of course not just to redirect a gcc or g++ command, but also to measure the CPU and elapsed time it took and the size of the input source files given to gcc. This is what I mean by "logging compilation commands": being able to do some statistics on them later.
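Concretely, a wrapper can get all three measurements without forking any extra process: the child's CPU time from the struct rusage filled by wait4(2), the elapsed time from clock_gettime(2), and the source sizes from stat(2). A simplified sketch, assuming only .c and .cc suffixes (the actual logged-gcc.cc handles more cases):

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <ctime>
#include <sys/resource.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

static bool has_suffix(const char *s, const char *suf) {
    size_t ls = std::strlen(s), lf = std::strlen(suf);
    return ls >= lf && std::strcmp(s + ls - lf, suf) == 0;
}

int main(int argc, char **argv) {
    const char *env = std::getenv("LOGGED_GCC");
    const char *compiler = env ? env : "/usr/bin/gcc";
    long srcsize = 0;                 // total size of source file arguments
    for (int i = 1; i < argc; i++) {
        struct stat st;
        if ((has_suffix(argv[i], ".c") || has_suffix(argv[i], ".cc"))
            && stat(argv[i], &st) == 0)
            srcsize += (long) st.st_size;
    }
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);      // start of elapsed time
    pid_t pid = fork();
    if (pid == 0) {                           // child: become the compiler
        argv[0] = const_cast<char *>(compiler);
        execv(compiler, argv);
        perror("execv");
        _exit(127);
    }
    int status = 0;
    struct rusage ru;
    wait4(pid, &status, 0, &ru);              // child's CPU time lands in ru
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double real = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double cpu = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec * 1e-6
               + ru.ru_stime.tv_sec + ru.ru_stime.tv_usec * 1e-6;
    std::fprintf(stderr,
                 "logged-gcc: status=%d cpu=%.3fs real=%.3fs srcsize=%ld\n",
                 WIFEXITED(status) ? WEXITSTATUS(status) : -1,
                 cpu, real, srcsize);
    return WIFEXITED(status) ? WEXITSTATUS(status) : 127;
}

Here the record goes to stderr for brevity; the real thing would call syslog(3) as above.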
I am aware of the -time option (and of -ftime-report) to gcc. Neither measures the size of the input source files. I am also aware of the GCC plugin machinery (see this draft report and the old GCC MELT project).
I want to log -and somehow benchmark- compilation commands as transparently as possible (e.g. with just a $PATH trick like above, and later by simply setting environment variables such as LOGGER_SQLITE in my ~/.zshrc), and with minor overhead: forking a few extra processes such as logger(1) or du(1) is not an option, since some compilations (e.g. of a small C source file; see my misc-basile repository for examples) can last less than a tenth of a second (see time(7)). Writing a benchmarking GCC plugin would depend on the version of GCC, is probably too heavy, and could require e.g. changes in Makefiles to invoke that plugin on every gcc command.
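For the LOGGER_SQLITE idea, a hedged sketch with the sqlite3 C API (the table name and columns here are my own invention, since this part is not implemented yet; link with -lsqlite3):

#include <cstdlib>
#include <sqlite3.h>

// append one compilation record to the SQLite database named by
// $LOGGER_SQLITE; schema below is an assumption, not a logged-gcc format
int log_to_sqlite(const char *cmd, int status,
                  double cpu_sec, double real_sec, long srcsize) {
    const char *path = std::getenv("LOGGER_SQLITE");
    if (!path) return 0;                       // feature disabled
    sqlite3 *db = nullptr;
    if (sqlite3_open(path, &db) != SQLITE_OK) return -1;
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS compilations("
        " cmd TEXT, status INT, cpu REAL, real REAL, srcsize INT,"
        " at DATETIME DEFAULT CURRENT_TIMESTAMP);",
        nullptr, nullptr, nullptr);
    sqlite3_stmt *st = nullptr;
    sqlite3_prepare_v2(db,
        "INSERT INTO compilations(cmd,status,cpu,real,srcsize)"
        " VALUES(?,?,?,?,?);", -1, &st, nullptr);
    sqlite3_bind_text(st, 1, cmd, -1, SQLITE_TRANSIENT);
    sqlite3_bind_int(st, 2, status);
    sqlite3_bind_double(st, 3, cpu_sec);
    sqlite3_bind_double(st, 4, real_sec);
    sqlite3_bind_int64(st, 5, srcsize);
    sqlite3_step(st);
    sqlite3_finalize(st);
    sqlite3_close(db);
    return 0;
}

With such a table, the statistics mentioned at the start (compilations slower than 5 seconds, or whose source exceeded 100k) become a single SELECT ... WHERE query.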
A concrete use case could be the RefPerSys project in a few months (start of 2021): since it is generating more and more of its own C++ code, compilation timing should drive the decision to generate fewer but larger C++ files, or more but smaller ones, especially when the generated C++ code uses several standard C++ containers or GUI toolkits such as FLTK.