TL;DR: In C99 and later, the sign of `a % b` is the same as the sign of `a`; before C99, it is implementation-defined.
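
A quick demonstration of all four sign combinations (compile as C99 or later):

```c
#include <stdio.h>

int main(void)
{
    /* In C99 and later, a % b takes the sign of a. */
    printf(" 7 %%  3 = %d\n",  7 %  3);  /* prints  1 */
    printf("-7 %%  3 = %d\n", -7 %  3);  /* prints -1 */
    printf(" 7 %% -3 = %d\n",  7 % -3);  /* prints  1 */
    printf("-7 %% -3 = %d\n", -7 % -3);  /* prints -1 */
    return 0;
}
```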
The standard requires that `(a/b)*b + a%b == a`; in other words, `a%b == a - (a/b)*b`. Assuming C99, `a/b` always rounds towards zero, so the truncated quotient is no larger in magnitude than the exact quotient. This means `|(a/b)*b| <= |a|` (1).
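
A minimal sketch of both facts, using `a = -7` and `b = 2` (the asserted values are what C99 guarantees):

```c
#include <assert.h>

int main(void)
{
    int a = -7, b = 2;

    /* C99 division truncates towards zero: -7 / 2 == -3, not -4. */
    assert(a / b == -3);

    /* The standard's identity: (a/b)*b + a%b == a. */
    assert((a / b) * b + a % b == a);

    /* Hence a % b == a - (a/b)*b == -7 - (-6) == -1. */
    assert(a % b == -1);

    return 0;
}
```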
`(a/b)*b` has the same sign as `a`, because the sign of `b` cancels out: the sign of `a/b` is the product of the signs of `a` and `b`, and multiplying by `b` again restores the sign of `a`. With this and (1), `a - (a/b)*b` is either zero or has the same sign as `a`.
Therefore, `a%b` has the same sign as `a`: it is the difference between `a` and the multiple of `b` nearest to `a` in the direction of zero.
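
A short check of the cancellation step, assuming C99 semantics: whichever sign `b` has, `(a/b)*b` lands on the same side of zero as `a`.

```c
#include <stdio.h>

int main(void)
{
    int a = -7;

    /* -7 / 3 truncates to -2, so (a/b)*b == -6: same sign as a. */
    printf("(-7 /  3) *  3 = %d\n", (a /  3) *  3);  /* -6 */

    /* -7 / -3 truncates to 2, so (a/b)*b == -6 again: b's sign cancels. */
    printf("(-7 / -3) * -3 = %d\n", (a / -3) * -3);  /* -6 */

    return 0;
}
```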
Note that prior to C99, the rounding direction of `a/b` was implementation-defined: it could round towards zero or towards negative infinity. Under rounding towards negative infinity, it can happen that `|(a/b)*b| > |a|`, making `a - (a/b)*b`, and hence `a%b`, the opposite sign to `a`. Hence, prior to C99, the sign of `a%b` is implementation-defined.
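
If a nonnegative remainder is wanted regardless of the implementation (pre-C99 or otherwise), a common normalization idiom works; the helper below is an illustrative sketch with a hypothetical name, not part of the standard:

```c
/* Hypothetical helper: a remainder normalized into [0, |b|), valid
   whichever way the implementation rounds integer division.
   Assumes b != 0 (and b != INT_MIN, so that -b does not overflow). */
int mod_nonnegative(int a, int b)
{
    int r = a % b;              /* sign of r may vary pre-C99 */
    if (r < 0)
        r += (b < 0) ? -b : b;  /* add |b| to shift into [0, |b|) */
    return r;
}
```

This works because every implementation still guarantees `(a/b)*b + a%b == a` with `|a%b| < |b|`, so at most one correction by `|b|` is ever needed.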