
c - How to compute log with the preprocessor

How do I compute log(x) with the preprocessor on Windows?

like:

#define A    log(4)/log(2)

and then, later in my code, declare the array

int b[A]; // A = 2 should be computed by the preprocessor, not at run time


1 Reply


Alright, and now for the dirty brute-force preprocessor trickery.

From your question, I assume that what you actually want is not a general logarithm (which isn't even possible in integer arithmetic) but the number of bits needed to represent a given number. If we restrict ourselves to 32-bit integers, there is a solution to this, although it's not pretty.

#define IS_REPRESENTIBLE_IN_D_BITS(D, N)                \
  (((unsigned long) N >= (1UL << (D - 1)) && (unsigned long) N < (1UL << D)) ? D : -1)

#define BITS_TO_REPRESENT(N)                            \
  (N == 0 ? 1 : (31                                     \
                 + IS_REPRESENTIBLE_IN_D_BITS( 1, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS( 2, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS( 3, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS( 4, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS( 5, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS( 6, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS( 7, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS( 8, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS( 9, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(10, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(11, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(12, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(13, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(14, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(15, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(16, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(17, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(18, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(19, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(20, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(21, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(22, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(23, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(24, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(25, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(26, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(27, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(28, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(29, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(30, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(31, N)    \
                 + IS_REPRESENTIBLE_IN_D_BITS(32, N)    \
                 )                                      \
   )

The idea is that a number n > 0 has a representation using exactly d bits if and only if n ≥ 2^(d−1) and n < 2^d. After treating the n = 0 case specially, we simply brute-force this out for all 32 possible answers.
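
To make this concrete: 42 satisfies 2^5 = 32 ≤ 42 < 64 = 2^6, so it needs exactly 6 bits; that 6 is the constant that will show up in the assembly listing further down.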

The helper macro IS_REPRESENTIBLE_IN_D_BITS(D, N) expands to an expression that evaluates to D if N can be represented using exactly D bits and to -1 otherwise. For any N > 0, exactly one of the 32 summands evaluates to its D while the other 31 evaluate to -1, so adding 31 at the end compensates for the negative summands and leaves just the answer. If the number cannot be represented in any of 1, ..., 32 bits, the overall result is -1, which should help us catch some errors.

The expression BITS_TO_REPRESENT(42) is a valid compile-time constant for use in an array length declaration.
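
As a quick compile-time sanity check (a minimal sketch, not part of the original answer, assuming a compiler that supports C11 _Static_assert), the macro can be exercised without running anything:

/* These declarations only compile if the macro yields the expected values. */
_Static_assert(BITS_TO_REPRESENT(0)   == 1, "0 takes 1 bit");
_Static_assert(BITS_TO_REPRESENT(1)   == 1, "1 takes 1 bit");
_Static_assert(BITS_TO_REPRESENT(42)  == 6, "42 takes 6 bits");
_Static_assert(BITS_TO_REPRESENT(255) == 8, "255 takes 8 bits");
_Static_assert(BITS_TO_REPRESENT(256) == 9, "256 takes 9 bits");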

All that said, the additional cost of simply making your array 32 elements long seems acceptable for many applications, and it saves you quite some trouble. So I would only use such trickery if I really had to.
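
For reference, the simpler alternative alluded to above is just a fixed worst-case size (MAX_BITS is a made-up name for illustration):

/* Simpler alternative: always size for the 32-bit worst case. */
enum { MAX_BITS = 32 };
int digits[MAX_BITS];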

Update: Just to avoid confusion: this solution does not use the preprocessor to evaluate the “logarithm”. All the preprocessor does is perform a text substitution, which you can see by compiling with the -E switch (at least with GCC). Let's have a look at this code:

int
main()
{
  int digits[BITS_TO_REPRESENT(42)];
  return 0;
}

It will be preprocessed to (be warned):

int
main()
{
  int digits[(42 == 0 ? 1 : (31 + (((unsigned long) 42 >= (1UL << (1 - 1)) && (unsigned long) 42 < (1UL << 1)) ? 1 : -1) + (((unsigned long) 42 >= (1UL << (2 - 1)) && (unsigned long) 42 < (1UL << 2)) ? 2 : -1) + (((unsigned long) 42 >= (1UL << (3 - 1)) && (unsigned long) 42 < (1UL << 3)) ? 3 : -1) + (((unsigned long) 42 >= (1UL << (4 - 1)) && (unsigned long) 42 < (1UL << 4)) ? 4 : -1) + (((unsigned long) 42 >= (1UL << (5 - 1)) && (unsigned long) 42 < (1UL << 5)) ? 5 : -1) + (((unsigned long) 42 >= (1UL << (6 - 1)) && (unsigned long) 42 < (1UL << 6)) ? 6 : -1) + (((unsigned long) 42 >= (1UL << (7 - 1)) && (unsigned long) 42 < (1UL << 7)) ? 7 : -1) + (((unsigned long) 42 >= (1UL << (8 - 1)) && (unsigned long) 42 < (1UL << 8)) ? 8 : -1) + (((unsigned long) 42 >= (1UL << (9 - 1)) && (unsigned long) 42 < (1UL << 9)) ? 9 : -1) + (((unsigned long) 42 >= (1UL << (10 - 1)) && (unsigned long) 42 < (1UL << 10)) ? 10 : -1) + (((unsigned long) 42 >= (1UL << (11 - 1)) && (unsigned long) 42 < (1UL << 11)) ? 11 : -1) + (((unsigned long) 42 >= (1UL << (12 - 1)) && (unsigned long) 42 < (1UL << 12)) ? 12 : -1) + (((unsigned long) 42 >= (1UL << (13 - 1)) && (unsigned long) 42 < (1UL << 13)) ? 13 : -1) + (((unsigned long) 42 >= (1UL << (14 - 1)) && (unsigned long) 42 < (1UL << 14)) ? 14 : -1) + (((unsigned long) 42 >= (1UL << (15 - 1)) && (unsigned long) 42 < (1UL << 15)) ? 15 : -1) + (((unsigned long) 42 >= (1UL << (16 - 1)) && (unsigned long) 42 < (1UL << 16)) ? 16 : -1) + (((unsigned long) 42 >= (1UL << (17 - 1)) && (unsigned long) 42 < (1UL << 17)) ? 17 : -1) + (((unsigned long) 42 >= (1UL << (18 - 1)) && (unsigned long) 42 < (1UL << 18)) ? 18 : -1) + (((unsigned long) 42 >= (1UL << (19 - 1)) && (unsigned long) 42 < (1UL << 19)) ? 19 : -1) + (((unsigned long) 42 >= (1UL << (20 - 1)) && (unsigned long) 42 < (1UL << 20)) ? 20 : -1) + (((unsigned long) 42 >= (1UL << (21 - 1)) && (unsigned long) 42 < (1UL << 21)) ? 21 : -1) + (((unsigned long) 42 >= (1UL << (22 - 1)) && (unsigned long) 42 < (1UL << 22)) ? 22 : -1) + (((unsigned long) 42 >= (1UL << (23 - 1)) && (unsigned long) 42 < (1UL << 23)) ? 23 : -1) + (((unsigned long) 42 >= (1UL << (24 - 1)) && (unsigned long) 42 < (1UL << 24)) ? 24 : -1) + (((unsigned long) 42 >= (1UL << (25 - 1)) && (unsigned long) 42 < (1UL << 25)) ? 25 : -1) + (((unsigned long) 42 >= (1UL << (26 - 1)) && (unsigned long) 42 < (1UL << 26)) ? 26 : -1) + (((unsigned long) 42 >= (1UL << (27 - 1)) && (unsigned long) 42 < (1UL << 27)) ? 27 : -1) + (((unsigned long) 42 >= (1UL << (28 - 1)) && (unsigned long) 42 < (1UL << 28)) ? 28 : -1) + (((unsigned long) 42 >= (1UL << (29 - 1)) && (unsigned long) 42 < (1UL << 29)) ? 29 : -1) + (((unsigned long) 42 >= (1UL << (30 - 1)) && (unsigned long) 42 < (1UL << 30)) ? 30 : -1) + (((unsigned long) 42 >= (1UL << (31 - 1)) && (unsigned long) 42 < (1UL << 31)) ? 31 : -1) + (((unsigned long) 42 >= (1UL << (32 - 1)) && (unsigned long) 42 < (1UL << 32)) ? 32 : -1) ) )];
  return 0;
}

This looks terrible, and if it were evaluated at run time, it would take quite a number of instructions. However, since all operands are constants (or literals, to be precise), the compiler is able to evaluate this at compile time. It has to do so, because an array length declaration must be a constant expression in C89.

If you are using the macro in other places that are not required to be compile-time constants, it is up to the compiler whether or not it evaluates the expression at compile time. However, any reasonable compiler should be expected to perform this rather elementary optimization – known as constant folding – if optimizations are enabled. If in doubt – as always – have a look at the generated assembly code.

For example, let us consider this program.

int
main()
{
  return BITS_TO_REPRESENT(42);
}

The expression in a return statement clearly isn't required to be a compile-time constant, so let's look at what code GCC will generate. (I'm using the -S switch to stop at the assembly stage.)
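
For example, assuming the snippet is saved as bits.c (a placeholder file name), a command along these lines produces the assembly listing:

gcc -S bits.c   # writes the generated assembly to bits.s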

Even without any optimizations enabled, I get the following assembly code which shows that the macro expansion was folded into the constant 6.

main:
.LFB0:
    .cfi_startproc
    pushq   %rbp
    .cfi_def_cfa_offset 16
    .cfi_offset 6, -16
    movq    %rsp, %rbp
    .cfi_def_cfa_register 6
    movl    $6, %eax  # See the constant 6?
    popq    %rbp
    .cfi_def_cfa 7, 8
    ret
    .cfi_endproc


...