OT: Requesting C advice

Michael Hennebry hennebry at web.cs.ndsu.nodak.edu
Thu May 24 17:35:26 UTC 2007


On Thu, 24 May 2007, Matthew Saltzman wrote:

> On Wed, 23 May 2007, Les wrote:
>
> > On Wed, 2007-05-23 at 18:45 -0500, Michael Hennebry wrote:
> >
> >> On Wed, 23 May 2007, Mike McCarty wrote:
> >>
> >>> Michael Hennebry wrote:
> >>>> On Wed, 23 May 2007, George Arseneault wrote:

> >>> On big-endian machines, they can. For example, with two's complement
> >>> arithmetic on a big-endian machine,
> >>>
> >>> printf("%d\n",-2);
> >>>
> >>> does not result in
> >>>
> >>> -2
> >>
> >> It should.
> >> printf, declared or not, will look for an int and get it.
> >>
> >> printf("%u\n", -2);
> >> is more interesting.
> >> We might be in the domain of nasal demons.
> >> printf("%u\n", (unsigned)-2);
> >> is legal, but rather obviously will not print "-2\n".
> >> It will probably print something even regardless of endianness.
>
> It will definitely print *something*.  The question is, can you guarantee
> what it will print.

It is guaranteed to print 2**n - 2 for some n >= 16.
Value conversion from signed to unsigned is defined.
Unsigned addition and subtraction are done modulo 2**n.
An unsigned int can hold values in the range 0..2**n - 1.
The aforementioned value conversion is done modulo (2**n - 1) + 1 = 2**n.

> It's not addressed directly in the FAQ, but I believe it's possible to
> prove that (unsigned) -2 must be the two's complement representation of -2
> in however many bits make up an int.  I know there was some controversy
> about that when the standard was being developed.  In any case, I don't
> know of any modern machine that doesn't represent negative integers in
> two's complement.


> >>>> Printing (int)sizeof(typename) will distinguish some types.
> >>>> Note that short, int and long usually only have two distinct sizes.
> >>>> It's allowed, but rare, for all the arithmetic types to have size 1.
>
> Or for them all to have different sizes.

Yup.

> >>> Note that what you suggest works because sizeof(.) for integer
> >>> types is going to be a small number. The only portable means
> >>
> >> For small read <=16.
> >>
> >>> of displaying an unsigned integer of unknown size is
> >>>
> >>> printf("Thing = %lu\n",(unsigned long int)Thing);
> >>>
> >>> For "rare" read "no known implementation". Since long int
> >>> is required to be at least 32 bits, that would require
> >>> that char be at least 32 bits.
> >>
> >> And double has to be more.
>
> How do you get that?  (Not saying you're wrong...)

Sorry about the lack of clarity.
I meant more than 32.

> >> My recollection is that there was a
> >> Cray compiler that had 64-bit chars.
> >> Anyone know for sure?
> >
> > sizeof generally returns size in bytes or words (depending on how the
> > implementer read the spec). I have never seen it return words.
>
> sizeof(char) == 1 is guaranteed by the standard.  There's no reference to
> "bytes", but it is commonly accepted that the char type is a byte.  It's
> possible to have chars that are not eight bits, but I can't think of a
> modern machine that does that.  There were some old machines (Honeywells?)
> that had six-bit bytes and 36-bit words.

Six-bit chars did not satisfy the standard.

> All this is based on my recollection of discussions in comp.lang.c and
> comp.std.c when the standard was under development.
>
> >
> >    And the Cray stored 8 characters in 64 bits using ASCII coding a
> > LONG time ago.  I had forgotten about that.  I think that was the model
> > where you sat on the processing unit when you were at the console.

I just read of a CDC machine that had 60-bit chars.
That is not big enough for doubles though.

It's worth noting that the size of a char
is not just a function of the machine.
It's a function of the compiler.
There was a machine with 10-bit chars.
I suspect that it was the same CDC machine I just mentioned.

-- 
Mike   hennebry at web.cs.ndsu.NoDak.edu
"Horse guts never lie."  -- Cherek Bear-Shoulders



