Many C programs define a macro min, for “minimum”, like this:
#define min(X, Y) ((X) < (Y) ? (X) : (Y))
When you use this macro with an argument containing a side effect, as shown here,
next = min (x + y, foo (z));
it expands as follows:
next = ((x + y) < (foo (z)) ? (x + y) : (foo (z)));
where x + y has been substituted for X and foo (z) for Y.
The function foo is used only once in the statement as it appears in the program, but the expression foo (z) has been substituted twice into the macro expansion. As a result, foo might be called two times when the statement is executed. If it has side effects or if it takes a long time to compute, the results might not be what you intended. We say that min is an unsafe macro.
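For instance, the duplicated call can be observed with a small test program. This is a minimal sketch, not part of the original example; foo here is a hypothetical function that counts how many times it is called:

#include <stdio.h>

#define min(X, Y) ((X) < (Y) ? (X) : (Y))

static int calls;              /* how many times foo has run */

int
foo (int z)                    /* hypothetical side-effecting function */
{
  calls++;
  return z * 2;
}

int
main (void)
{
  int x = 10, y = 10, z = 1;
  int next = min (x + y, foo (z));
  /* The expansion contains foo (z) twice: once in the comparison and
     once in the false branch of ?:.  Here x + y (20) is not less than
     foo (z) (2), so the false branch runs and foo is called twice.  */
  printf ("next = %d, calls = %d\n", next, calls);   /* next = 2, calls = 2 */
  return 0;
}

Note that the call count depends on which branch of the conditional is taken; had the first argument been the smaller one, foo would have been called only once. Either way the behavior differs from an ordinary function call, which would evaluate foo (z) exactly once.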
The best solution to this problem is to define min in a way that computes the value of foo (z) only once. The C language offers no standard way to do this, but it can be done with GNU extensions as follows:
#define min(X, Y)                \
  ({ typeof (X) x_ = (X);        \
     typeof (Y) y_ = (Y);        \
     (x_ < y_) ? x_ : y_; })
The ‘({ … })’ notation produces a compound statement that acts as an expression. Its value is the value of its last statement. This permits us to define local variables and assign each argument to one. The local variables have underscores after their names to reduce the risk of conflict with an identifier of wider scope (it is impossible to avoid this entirely). Now each argument is evaluated exactly once.
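Substituting this definition into the sketch above (the hypothetical call-counting foo; GNU C extensions are required to compile it) shows that foo is now called exactly once, whichever branch of the conditional is taken:

#define min(X, Y)                \
  ({ typeof (X) x_ = (X);        \
     typeof (Y) y_ = (Y);        \
     (x_ < y_) ? x_ : y_; })

…
int x = 10, y = 10, z = 1;
int next = min (x + y, foo (z));
/* foo (z) is evaluated once, when its value is stored in y_;
   calls is 1 no matter which of x_ and y_ is smaller.  */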
If you do not wish to use GNU C extensions, the only solution is to be careful when using the macro min. For example, you can calculate the value of foo (z), save it in a variable, and use that variable in min:
#define min(X, Y)  ((X) < (Y) ? (X) : (Y))
…
{
  int tem = foo (z);
  next = min (x + y, tem);
}
(where we assume that foo returns type int).