FAQ
Is there a function that takes a number with binary numeral a1...an to the
number with binary numeral b1...bn, where each bi is 1 if ai is 0, and vice
versa? (For example, the function's value at 18 [binary 10010] would be 13
[binary 01101].) I thought this was what the tilde operator (~) did, but when I
went to try it I found out that wasn't the case. I discovered by experiment (and
verified by looking at the documentation) that the tilde operator takes n
to -(n+1). I can't imagine what that has to do with binary numerals. Can anyone
shed some light on that? (In case you're curious, I'm writing a script that will
play Nim, just as a way of familiarizing myself with bitwise operators. Good
thing, too: I thought I understood them, but apparently I don't.)

Muchas gracias for any and all help and hints.

Peace,
EJ


  • OKB (not okblacke) at Aug 17, 2003 at 5:28 am

    Elaine Jackson wrote:

    Is there a function that takes a number with binary numeral a1...an
    to the number with binary numeral b1...bn, where each bi is 1 if ai
    is 0, and vice versa? (For example, the function's value at 18
    [binary 10010] would be 13 [binary 01101].) I thought this was what
    the tilde operator (~) did, but when I went to try it I found out
    that wasn't the case. I discovered by experiment (and verified by
    looking at the documentation) that the tilde operator takes n to
    -(n+1). I can't imagine what that has to do with binary numerals.
    Can anyone shed some light on that? (In case you're curious, I'm
    writing a script that will play Nim, just as a way of familiarizing
    myself with bitwise operators. Good thing, too: I thought I
    understood them, but apparently I don't.)
    I think this is because it's also inverting the leading zero bits.
    That is, 18 isn't really 10010, it's 000...0010010 with a bunch of 0s
    at the front. (I'm not sure, but I think the number of bits in the
    integer type can change with different implementations.) So when you
    bitwise-not it, you get 111...1101101, which is interpreted as -19.

    --
    --OKB (not okblacke)
    "Do not follow where the path may lead. Go, instead, where there is
    no path, and leave a trail."
    --author unknown
  • Graham Fawcett at Aug 17, 2003 at 5:35 am

    Elaine Jackson wrote:
    Is there a function that takes a number with binary numeral a1...an to the
    number with binary numeral b1...bn, where each bi is 1 if ai is 0, and vice
    versa? (For example, the function's value at 18 [binary 10010] would be 13
    [binary 01101].) I thought this was what the tilde operator (~) did, but when I
    went to try it I found out that wasn't the case. I discovered by experiment (and
    verified by looking at the documentation) that the tilde operator takes n
    to -(n+1). I can't imagine what that has to do with binary numerals.
    It has a lot to do with binary! Google for "two's complement".

    In the meantime, try this:
    >>> ~18 & 31
    13

    The '~' operator can't know how much precision you intend -- that is,
    how many bits you're operating on, or expecting in your result. In your
    example, you represent decimal 18 as '10010', but '000000010010' is also
    correct, right?

    In two's-complement math the inversion is '...11101101' with an
    unbounded string of leading 1 bits -- whether you start from '10010'
    or '000000010010' -- and that pattern is equivalent to decimal -19.

    And-ing with a mask of 'n' one-bits will ensure that you only
    get the least significant n bits -- and this is what you're looking for.
    Since you're operating on five bits in your example, I chose decimal 31,
    or '11111'.
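
    To package that as the function Elaine asked for, here is a minimal
    sketch (the name invert_numeral is invented here, and int.bit_length()
    comes from Pythons later than this thread):

    def invert_numeral(n):
        # flip every digit of n's minimal binary numeral (n > 0)
        mask = (1 << n.bit_length()) - 1  # e.g. 0b11111 for 18 == 0b10010
        return n ^ mask                   # same as ~n & mask

    print(invert_numeral(18))  # 13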

    -- Graham
  • Tim Peters at Aug 17, 2003 at 5:35 am
    [Elaine Jackson]
    Is there a function that takes a number with binary numeral a1...an
    to the number with binary numeral b1...bn, where each bi is 1 if ai
    is 0, and vice versa? (For example, the function's value at 18
    [binary 10010] would be 13 [binary 01101].) I thought this was what
    the tilde operator (~) did,
    Surprise: that is what ~ does.
    but when I went to try it I found out that wasn't the case.
    Please give a specific example. To understand your example above, note that
    binary 10010 actually has an unbounded number of 0 bits "to the left":

    ...00000010010 = 13

    The bitwise inversion of that therefore has an unbounded number of 1 bits
    "to the left":

    ...11111101101 = -19

    This is why you get -19:
    >>> n = 18
    >>> ~n
    -19

    Any answer other than that is wishing away the 0 bits at the left in the
    input. Python can't *guess* how many bits you want to keep. If you only
    want to keep the rightmost 5 bits, then explicitly mask off the rightmost 5
    bits:
    >>> (~18) & 0x1f
    13

    So that's the 13 "you expected".
    I discovered by experiment (and verified by looking at the
    documentation) that the tilde operator takes n to -(n+1). I can't
    imagine what that has to do with binary numerals. Can anyone
    shed some light on that?
    Python represents negative integers in unbounded 2's-complement form (well,
    it doesn't really under the covers, but it maintains the illusion of doing
    so in all arithmetic and logical operators). The 2's-complement of an
    integer is equal to 1 plus its 1's-complement form, and 1's-complement is
    identical to bitwise inversion. So

    -n = 1 + (~n)

    Then

    ~n = -(n+1)

    follows from that via rearrangement.
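
    A quick sanity check of that identity (any Python version):

    for n in (0, 1, 5, 18, 1000, -1, -19, 2**40):
        assert ~n == -(n + 1)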
    (In case you're curious, I'm writing a script that will play Nim,
    just as a way of familiarizing myself with bitwise operators. Good
    thing, too: I thought I understood them, but apparently I don't.)
    You're only overlooking the consequences of an infinite amount of
    information <wink>.
  • Graham Fawcett at Aug 17, 2003 at 6:14 am

    Tim Peters wrote:

    (In case you're curious, I'm writing a script that will play Nim,
    just as a way of familiarizing myself with bitwise operators. Good
    thing, too: I thought I understood them, but apparently I don't.)
    You're only overlooking the consequences of an infinite amount of
    information <wink>.
    Excellent line! I may borrow that one, Tim; I'll use it on the boss,
    next time that he asks why my application doesn't work the way he
    expected it to. Or why we ran out of coffee, or why the printer isn't
    working... It may need to be accompanied by a smoldering, wide-eyed
    Rasputin-ish stare, however, or it might lack the necessary mystique to
    suppress any further questions.

    Of course, after a good night's sleep, maybe it won't seem like such a
    good idea. ;-)

    nocturnally yours,

    -- Graham
  • Tim Lesher at Aug 17, 2003 at 8:00 pm
    Graham Fawcett <fawcett at teksavvy.com> wrote in message news:<mailman.1061100930.7013.python-list at python.org>...
    Tim Peters wrote:
    You're only overlooking the consequences of an infinite amount of
    information <wink>.
    Excellent line! [...]
    It may need to be accompanied by a smoldering, wide-eyed
    Rasputin-ish stare, however, or it might lack the necessary mystique to
    suppress any further questions.
    Somehow, I associated it with a smoldering, wide-eyed Tom Baker-ish
    stare. To each his own.

    --
    Tim Lesher
    tim at lesher.ws
  • Michael Peuser at Aug 17, 2003 at 7:01 am
    "Elaine Jackson" <elainejackson7355 at home.com> schrieb im Newsbeitrag
    news:1YD%a.747879$3C2.17342633 at news3.calgary.shaw.ca...


    ...
    I'm writing a script that will
    play Nim, just as a way of familiarizing myself with bitwise operators. Good
    thing, too: I thought I understood them, but apparently I don't.)
    Hi Elaine,
    You have described a general misconception, and you seem not to be the
    only one living with it.
    The enlightening answers already posted may suffice, but I should like
    to add some more "enlightenment":
    Bit complements have a lot to do with set complements and arithmetic
    negation (sometimes called two's complement, for reasons that become
    obvious below). Consider the set consisting of "red" and "blue". Now
    what is its complement? "green" and "yellow" is obviously the wrong
    answer. In fact you cannot give any answer before you define the total
    set you are dealing with. The same applies to logical bit operations.
    Generally you take a "processor" word, or a part of it, as that total
    set. Some high-level languages are more flexible; and even some
    computers ("vector processors") are.

    The only rule is, that ~(~x) == x

    The same situation arises with numbers: what is the negation of +5? You
    have to think very hard! This is a trick question, and you will probably
    give a "trick answer": -5. You should be aware that this is just a
    trick. "-5" contains no other information than that it is some
    "complement" of 5. (Same with complex "imaginary" numbers: 5j (in Python)
    just says it is some fancy 5.)

    Now we define a transformation between positive numbers and bit
    patterns: 5 <-> LoL (writing L for a one bit and o for a zero bit).
    Noting that 5 == ...000005 or LoL == ...ooooLoL does not help any
    understanding, so you generally skip this part.

    Now you do some arithmetic "inversion": 5 -> -5. This however can (and
    should) stay a secret of the processor! By no means should you be
    interested in how the machine represents "-5". If you are curious, know
    that there have been times when computers represented -5 as ...LLLoLo.
    Yes, it worked! And you had two different "zeros" then: +0 and -0 !!!!

    Most computers do not distinguish between the representation of negative
    numbers and complemented sets (let alone record the special "total set"
    the complement was referring to). Thus the "secret" of modern
    two's-complement computer arithmetic is always disclosed to you.

    Note that there is no use in something like "masking off" the MSB, i.e.
    letting bit complements work on 31 bits only. That just leads to
    ~5 == ~LoL == LL..LoLo == 2,147,483,642. Not much improvement, eh!?
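
    For the record, that is easy to check in Python:

    >>> ~5 & (2**31 - 1)
    2147483642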

    Kindly
    Michael P
  • Elaine Jackson at Aug 17, 2003 at 1:05 pm
    "Tim Peters" <tim.one at comcast.net> wrote in message
    news:mailman.1061098645.13097.python-list at python.org...

    <snip>
    To understand your example above, note that
    binary 10010 actually has an unbounded number of 0 bits "to the left":

    ...00000010010 = 13

    The bitwise inversion of that therefore has an unbounded number of 1 bits
    "to the left":

    ...11111101101 = -19
    ** What you're saying, the way I read it, is that 13+S=(-19), where S is the
    (divergent) sum of all the powers 2^k with k>=5. I'm still at sea, in other
    words.

    <snip>
    Python can't *guess* how many bits you want to keep.
    ** But it could if someone had told it that the leftmost nonzero digit is the
    place to start. I just assumed somebody had told it that.

    <snip>
    Python represents negative integers in unbounded 2's-complement form
    ** Now we're getting someplace. That was the missing piece in my puzzle.

    Thanks for the help.

    Peace,
    EJ
  • Michael Peuser at Aug 17, 2003 at 1:54 pm
    "Elaine Jackson" <elainejackson7355 at home.com> schrieb im Newsbeitrag
    news:d4L%a.781431$Vi5.17573533 at news1.calgary.shaw.ca...
    "Tim Peters" <tim.one at comcast.net> wrote in message
    news:mailman.1061098645.13097.python-list at python.org...

    <snip>
    To understand your example above, note that
    binary 10010 actually has an unbounded number of 0 bits "to the left":

    ...00000010010 = 13
    .. should read 18 BTW
    The bitwise inversion of that therefore has an unbounded number of 1
    bits
    "to the left":

    ...11111101101 = -19
    ** What you're saying, the way I read it, is that 13+S=(-19), where S is the
    (divergent) sum of all the powers 2^k with k>=5. I'm still at sea, in other
    words.
    I think not. Tim seems to be pointing out that there is no natural final
    "representation" of negative numbers unless we invent some convention. The
    interpretation of 1101101 or 11111101101 or ......1111101101 as -19 is a
    convention and has nothing to do with the more natural interpretation of
    10010 as 18. The main reason is that the binary adders used in computers
    are very simple to (ab)use for subtraction in that way....

    Kindly
    Michael P
  • Bengt Richter at Aug 17, 2003 at 6:03 pm

    On Sun, 17 Aug 2003 13:05:13 GMT, "Elaine Jackson" wrote:
    "Tim Peters" <tim.one at comcast.net> wrote in message
    news:mailman.1061098645.13097.python-list at python.org...

    <snip>
    To understand your example above, note that
    binary 10010 actually has an unbounded number of 0 bits "to the left":

    ...00000010010 = 13

    The bitwise inversion of that therefore has an unbounded number of 1 bits
    "to the left":

    ...11111101101 = -19
    ** What you're saying, the way I read it, is that 13+S=(-19), where S is the
    (divergent) sum of all the powers 2^k with k>=5. I'm still at sea, in other
    words.
    Since all the sign bits extend infinitely, we can chop them off any place above
    the first of the repeating bits to get N bits with some number of repetitions at the
    top. We will see that the interpreted numeric value does not depend on how far
    to the left we chop the bits, so long as we keep at least the first of
    the repeating bits.

    This representation of a signed integer with N total bits b(i) little-endianly numbered
    with i going 0 to N-1 and each bit having a value b(i) of 0 or 1 is interpreted as having
    the (N-1)th bit as the sign bit, and the rest positive, i.e.,

    a = -b(N-1)*2**(N-1) + sum(b(i)*2**i for i = 0 .. N-2)

    or in python, operating on a little-endian list of N bits,

    >>> def bitsvalue(b):  # b is bitlist, to follow convention above
    ...     N = len(b)
    ...     sum = -b[N-1]*2**(N-1)
    ...     for i in xrange(N-1):
    ...         sum += b[i]*2**i
    ...     return sum
    ...

    (Remember this is little-endian: least significant bit to the left, sign at the right)
    >>> bitsvalue([0,1,0,0,1,0])
    18
    >>> bitsvalue([1,0,1,1,0,1])
    -19

    It doesn't matter how far you extend a copy of the top bit, the value remains the same:
    >>> bitsvalue([1,0,1,1,0,1,1,1,1,1,1,1])
    -19

    You can verify it from the math of the sum, if you separate it into two sums, one with
    the top repeating sign bits b(N-r) to b(N-1) and the other with the rest, which has the
    same constant value. I'll leave it as an exercise.
    >>> bitsvalue([1,1,1,1,1,1])
    -1
    >>> bitsvalue([1,1,1,1])
    -1
    >>> bitsvalue([1,1,1,0])
    7
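
    For reference, a sketch of that exercise in code (generator expressions
    need a later Python than this thread's): the repeated sign bits in
    positions M through N-1 always contribute -2**M in total, independent
    of N:

    for M in range(4):
        for N in range(M + 2, 12):
            assert sum(2**i for i in range(M, N - 1)) - 2**(N - 1) == -2**M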
    <snip>
    Python can't *guess* how many bits you want to keep.
    ** But it could if someone had told it that the leftmost nonzero digit is the
    place to start. I just assumed somebody had told it that.
    Or the leftmost bit that is either the only bit of all or different
    from the one to its right is the minimum sign bit to copy arbitrarily leftwards.
    <snip>
    Python represents negative integers in unbounded 2's-complement form
    ** Now we're getting someplace. That was the missing piece in my puzzle.
    Ok, I guess you don't need the demo prog after all ;-)

    Regards,
    Bengt Richter
  • Carl Banks at Aug 17, 2003 at 10:08 pm

    Elaine Jackson wrote:
    <snip>
    Python can't *guess* how many bits you want to keep.
    ** But it could if someone had told it that the leftmost nonzero
    digit is the place to start. I just assumed somebody had told it
    that.

    And if someone had done that, it would violate the invariant:

    ~(~x) == x

    In fact, by repeatedly applying ~ you'd eventually zero all the bits.
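
    Concretely, here is a sketch of such a leading-zeros-ignoring inverter
    (the name is invented, and int.bit_length() postdates this thread):

    def flip_written_bits(x):
        # invert only the digits of x's written numeral
        return x ^ ((1 << max(x.bit_length(), 1)) - 1)

    x = 18
    while x:
        x = flip_written_bits(x)
        print(x)  # prints 13, then 2, 1, 0 -- so ~~18 would give 2, not 18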



    --
    CARL BANKS http://www.aerojockey.com/software
    "You don't run Microsoft Windows. Microsoft Windows runs you."
  • Irmen de Jong at Aug 17, 2003 at 3:21 pm
    While others explained how the ~ operator works, let me suggest
    another possibility: the bitwise exclusive or.
    >>> def bin(i):
    ...     l = ['0000', '0001', '0010', '0011', '0100', '0101', '0110', '0111',
    ...          '1000', '1001', '1010', '1011', '1100', '1101', '1110', '1111']
    ...     s = ''.join(map(lambda x, l=l: l[int(x, 16)], hex(i)[2:]))
    ...     if s[0] == '1' and i > 0:
    ...         s = '0000' + s
    ...     return s
    ...
    >>> bin(18)
    '00010010'
    >>> ~18
    -19
    >>> bin(~18)  # tricky...
    '11111111111111111111111111101101'
    >>> ~18 & 0x1f
    13
    >>> bin(~18 & 0x1f)
    '00001101'
    >>> 18 ^ 0x1f  # XOR!
    13
    >>> bin(18 ^ 0x1f)  # XOR
    '00001101'


    You still have to think about the number of bits you want to invert.
    x ^ 0x1f inverts the 5 least significant bits of x.
    x ^ 0xff inverts the 8 least significant bits of x, and so on.


    --Irmen de Jong
  • Michael Peuser at Aug 17, 2003 at 9:41 pm
    "Dennis Lee Bieber" <wlfraed at ix.netcom.com> schrieb im Newsbeitrag
    news:c84511-ol3.ln1 at beastie.ix.netcom.com...
    Elaine Jackson fed this fish to the penguins on Saturday 16 August 2003
    09:58 pm:

    Is there a function that takes a number with binary numeral a1...an to
    the number with binary numeral b1...bn, where each bi is 1 if ai is 0,
    and vice versa? (For example, the function's value at 18 [binary
    10010] would be 13
    [binary 01101].) I thought this was what the tilde operator (~) did,
    but when I
    went to try it I found out that wasn't the case. I discovered by
    experiment (and verified by looking at the documentation) that the
    tilde operator takes n to -(n+1). I can't imagine what that has to do
    with binary numerals.
    [..]
    You've had lots of answers at the moment though I haven't seen anyone
    explain away the "+1" part...

    Most computers use twos-complement arithmetic to avoid the problem of
    having two valid values for integer 0, which is what appears in ones
    complement arithmetic.

    For argument, assume an 8-bit integer. The value of "5" would be
    represented as 00000101. The one's complement negative would be
    11111010. So far there isn't any problem... But consider the value of
    0, represented as 00000000. A one's complement negative would become
    11111111 -- But mathematically, +0 = -0; in one's complement math, this
    does not hold true.

    So a little trick is played, to create twos complement... To negate a
    number, we take the ones complement, and then add 1 to the result. The
    "5" then goes through: 00000101 -> 11111010 + 1 -> 11111011... Looks
    strange, doesn't it... But watch what happens to that 8-bit 0: 00000000
    -> 11111111 + 1 -> (overflows) 00000000.... Negative 0 is the same as
    positive 0.
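
    The same walkthrough as a sketch in Python, using an 8-bit model
    (negate8 is an illustrative name, not from the post):

    def negate8(x):
        ones = x ^ 0xFF           # one's complement within 8 bits
        return (ones + 1) & 0xFF  # add 1; & 0xFF drops the overflow carry

    print(negate8(5))  # 251 == 0b11111011, the 8-bit pattern for -5
    print(negate8(0))  # 0 -- negative zero collapses back to positive 0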
    [..]

    I have the impression (may be wrong) that you are working under the
    misconception that there can be a "natural" binary representation of
    negative numbers!?
    Three conventions have commonly been used so far:
    1's complement
    2's complement
    sign tag plus absolute binary values

    All of them have their pros and cons. For a mixture of very technical
    reasons (you mentioned the +0/-0 conflict, I might add the use of binary
    adders for subtraction) most modern computers use 2's complement, and this now
    leads to those funny speculations in this thread. ;-)

    Kindly
    Michael P
  • Elaine Jackson at Aug 17, 2003 at 11:26 pm
    "Michael Peuser" <mpeuser at web.de> wrote in message
    news:bhosrr$u57$06$1 at news.t-online.com...
    I have the impression (may be wrong) that you are working under the
    misconception that there can be a "natural" binary representation of
    negative numbers!?
    Three conventions have commonly been used so far:
    1's complement
    2's complement
    sign tag plus absolute binary values
    The last alternative sounds like what I was assuming. If it is, I would argue
    that it's pretty darn natural. Here's a little function to illustrate what I
    mean:

    def matilda(n): ## "my tilde"
    if 0<=n<pow(2,29):
    for i in range(1,31):
    iOnes=pow(2,i)-1
    if n<=iOnes:
    return iOnes-n
    else:
    raise
  • Tim Peters at Aug 18, 2003 at 12:18 am
    [Michael Peuser]
    I have the impression (may be wrong) that you are working under the
    misconception that there can be a "natural" binary representation of
    negative numbers!? Three conventions have commonly been used so far:
    1's complement
    2's complement
    sign tag plus absolute binary values
    [Elaine Jackson]
    The last alternative sounds like what I was assuming. If it is, I
    would argue that it's pretty darn natural. Here's a little function to
    illustrate what I mean:

    def matilda(n): ## "my tilde"
        if 0<=n<pow(2,29):
            for i in range(1,31):
                iOnes=pow(2,i)-1
                if n<=iOnes:
                    return iOnes-n
        else:
            raise
    As Carl Banks pointed out, under this meaning the fundamental identity

    ~~x == x

    almost never holds. For example, matilda(18) is 13, but matilda(13) is 2.
    ~ is its own inverse under Python's meaning. You can see that formally by
    algebraic manipulation:

    ~~n = ~(-(n+1)) = -(-(n+1)+1) == n

    or more easily by noting that ~x in Python acts the same as xor'ing x with
    an infinite string of 1 bits.
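
    That last observation can be checked directly, since -1 plays the role
    of the "infinite string of 1 bits":

    for x in (0, 1, 18, -19, 2**60):
        assert ~x == x ^ -1
        assert ~~x == x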
  • Grant Edwards at Aug 18, 2003 at 3:12 am

    In article <bhosrr$u57$06$1 at news.t-online.com>, Michael Peuser wrote:

    I have the impression (may be wrong) that you are working under the
    misconception that there can be a "natural" binary representation of
    negative numbers!?
    Three conventions have commonly been used so far:
    1's complement
    2's complement
    sign tag plus absolute binary values

    All of them have their pros and cons. For a mixture of very technical
    reasons (you mentioned the +0/-0 conflict, I might add the use of binary
    adders for subtraction)
    The latter is _far_ more important than the former. Being able
    to use a simple binary adder to do operations on either signed
    or unsigned values is a huge savings in CPU and ISA design. I
    doubt that anybody really cares about the +0 vs. -0 issue very
    much (IEEE FP has zeros of both signs, and nobody seems to
    care).
    most modern computers use 2's complement, and this now leads to
    those funny speculations in this thread. ;-)
    --
    Grant Edwards    grante at visi.com
    Yow! An Italian is COMBING his hair in suburban DES MOINES!
  • Bengt Richter at Aug 18, 2003 at 4:11 am

    On Sun, 17 Aug 2003 22:18:39 GMT, Dennis Lee Bieber wrote:
    Michael Peuser fed this fish to the penguins on Sunday 17 August 2003
    02:41 pm:
    I have the impression (may be wrong) that you are working under the
    misconception that there can be a "natural" binary representation of
    negative numbers!?
    Apologies if I gave that impression... the +/- 0 technical affair is
    the main reason I went into the whole thing...
    Three conventions have commonly been used so far:
    1's complement
    2's complement
    sign tag plus absolute binary values

    All of them have their pros and cons. For a mixture of very technical
    reasons (you mentioned the +0/-0 conflict, I might add the use of
    binary adders for subtraction) most modern computers use 2's complement,
    and this now leads to those funny speculations in this thread. ;-)
    From a human readable standpoint, your third option is probably the
    most "natural"; after all, what is -19 in human terms but a "pure" 19
    prefaced with a negation tag marker... (I believe my college
    mainframe's BCD hardware unit actually put the sign marker in the
    nibble representing the decimal point location -- but it has been 25
    years since I had to know what a Xerox Sigma did for COBOL packed
    decimal <G>).

    ie, 00010011 vs -00010011 <G>

    1s complement is electrically easy; just "not" each bit.

    2s complement is mathematically cleaner as 0 is 0, but requires an
    adder to the 1s complement circuit... Though both complement styles
    lead to the ambiguity of signed vs unsigned values
    Everyone says "two's complement" and then usually starts talking about numbers
    that are bigger than two. I'll add another interpretation, which is what I thought
    when I first heard of it w.r.t. a cpu that was designed on the basis that all its
    "integer" numbers were fixed point fractions up to 0.9999.. to whatever precision
    the binary fractional bits provided. There was no units bit. And if you took one
    of these fractional values 0.xxxx and subtracted it from 2.0, you would have a
    complementary number with respect to two. Well, for addition and subtraction, that turns
    out to work just like the "two's complement" integers we are used to. But since the
    values of the fractional bits were all negative powers of two, squaring e.g., .5 had
    to result in a consistent representation of 0.25 -- i.e. in binary squaring 0.1
    resulted in 0.01 -- which is shifted one bit from what you get looking at the numbers
    as integers with the lsb at the bottom of the registers and the result.

    I.e., a 32-bit positive integer n in the fractional world was n*2**-31. If you square
    that for 64 bits, you get n**2, but in the fractional world that looks like (n**2)*2**-63,
    where it's supposed to be (n*2**-31)**2 => (n**2)*2**-62 with respect to the binary point.
    The fractional model preserved an extra bit of precision in multiplies.

    So on that machine we used to count bits from the left instead of the right, and place imaginary
    binary points in the representations, so a binary 0.101 could be read as "5 at 3" or "2.5 at 2"
    or "10 at 4" etc. And the multiplying rule was x at xbit times y at ybit => x*y at xbit+ybit.

    You can do the same counting the bit positions leftwards from lsb at 0, as we usually do now,
    of course, to play with fixed point fractions. A 5 at 0 is then 1.25 at 2 ;-)

    Anyway, my point is that there was a "two's complement" implementation that really meant
    a numeric value complement with respect to the value two ;-)
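
    A sketch of that bookkeeping in Python ("v at b" meaning v * 2**-b;
    mul_at is an invented helper, not from the post):

    def mul_at(x, xbit, y, ybit):
        # multiplying the raw integer parts just adds the scale factors
        return x * y, xbit + ybit

    raw, scale = mul_at(5, 3, 5, 3)  # 0.101 * 0.101, i.e. 0.625 * 0.625
    print(raw * 2.0 ** -scale)       # 0.390625, i.e. "25 at 6"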

    Regards,
    Bengt Richter
  • Michael Peuser at Aug 18, 2003 at 7:42 am
    "Bengt Richter" <bokr at oz.net> schrieb im Newsbeitrag
    news:bhpjm1$g01$0 at 216.39.172.122...
    On Sun, 17 Aug 2003 22:18:39 GMT, Dennis Lee Bieber
    wrote:
    Michael Peuser fed this fish to the penguins on Sunday 17 August 2003
    02:41 pm:
    <snip>
    Anyway, my point is that there was a "two's complement" implementation
    that really meant a numeric value complement with respect to the value
    two ;-)

    A very good point! I might add that this is by no means an exotic feature.
    Mathematically speaking there is great charm in computing just inside the
    interval (-1,+1). And if you have no FPU you can do *a lot* of pseudo-real
    operations. You have to keep track of the scale of course - it is a little
    bit like working with slide rules, if anyone can remember those tools ;-)

    Even modern chips have support for this format, e.g. there is the $5 Atmel
    Mega AVR which has two kinds of multiplication instructions: one for
    integer multiplication and one which automatically adds a left shift after
    the multiplication! I leave it as an exercise to find out why this is
    necessary when multiplying fractional numbers ;-)

    Negative numbers are formed according to the same rule for fractionals and
    integers (see the sketch below):
    Take the maximum positive number: 2**32-1 or 0.999999
    Extend your scope
    Add one bit: 2**32 or 1
    Double it: 2**33 or 2
    Subtract the number in question
    Reduce your scope again
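
    That recipe, sketched for 32-bit integers (the modulo does the final
    "reduce your scope" step):

    BITS = 32
    def negate(x):
        # extend, double (2**(BITS+1)), subtract, reduce back to BITS bits
        return (2**(BITS + 1) - x) % 2**BITS

    print(negate(5))                   # 4294967291, the 32-bit pattern for -5
    print(negate(5) == -5 % 2**BITS)   # True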

    Kindly Michael P
  • Grant Edwards at Aug 18, 2003 at 1:11 pm

    In article <bhq01j$tge$03$1 at news.t-online.com>, Michael Peuser wrote:

    A very good point! I might add that this is by no means an exotic feature.
    Mathematically speaking there is great charm in computing just inside the
    interval (-1,+1). And if you have no FPU you can do *a lot* of pseudo-real
    operations. You have to keep track of the scale of course - it is a little
    bit like working with slide rules, if anyone can remember those tools ;-)
    Sure. I've got two sitting at home. :)

    FWIW, it used to be fairly common for process-control systems
    to define operations only over the interval (-1,+1). This made
    implementation easy, and the input and output devices
    (temp/pressure sensors, valves, whatnot) all had pre-defined
    ranges that mapped logically to the (-1,+1) interval.

    --
    Grant Edwards    grante at visi.com
    Yow! The SAME WAVE keeps coming in and COLLAPSING like a rayon MUU-MUU...
  • Michael Peuser at Aug 18, 2003 at 2:58 pm
    "Grant Edwards" <grante at visi.com> schrieb im Newsbeitrag
    news:3f40d077$0$164$a1866201 at newsreader.visi.com...
    In article <bhq01j$tge$03$1 at news.t-online.com>, Michael Peuser wrote:
    A very good point! I might add that this is by no means an exotic feature.
    Mathematically speaking there is great charm in computing just inside the
    interval (-1,+1). And if you have no FPU you can do *a lot* of pseudo-real
    operations. You have to keep track of the scale of course - it is a little
    bit like working with slide rules, if anyone can remember those tools ;-)
    Sure. I've got two sitting at home. :)

    FWIW, it used to be fairly common for process-control systems
    to define operations only over the interval (-1,+1). This made
    implementation easy, and the input and output devices
    (temp/pressure sensors, valves, whatnot) all had pre-defined
    ranges that mapped logically to the (-1,+1) interval.

    --
    Yes it simplifies a lot of matters, even when using full floating point
    numbers. Take OpenGL e.g. The colour space is a 1x1x1 cube. Very fine! No
    magic numbers near 256 ;-)

    Kindly
    Michael P
  • Elaine Jackson at Aug 17, 2003 at 11:35 pm
    In case any fellow newbies have been following this thread, here is the finished
    Nim script. It doesn't use bit-manipulation as much as I thought it would.
    (Still, I got the basics straight. Thanks again to everyone who helped out!) For
    a readable account of the optimal strategy for Nim, see Hardy & Wright's
    "Introduction to the Theory of Numbers". Peace.

    ## NIM
    from random import random
    piles=[0,0,0]

    ########################################

    def take(n,pileNum):
        if piles[pileNum]>=n:
            piles[pileNum]=piles[pileNum]-n
            print piles
        else:
            print "illegal move"

    ########################################

    def newGame():
        for i in range(3):
            piles[i]=int(9*random())+1
        print piles

    ########################################

    def indexOfMax():
        returnValue=0
        for i in range(1,3):
            if piles[i]>piles[returnValue]:
                returnValue=i
        return returnValue

    ########################################

    def leftmostBitIndex(n):
        if 0<n<pow(2,29):
            for i in range(1,31):
                if pow(2,i)>n:
                    return (i-1)
        else:
            raise

    ########################################

    def yourMove():
        magicNum=piles[0]^piles[1]^piles[2]
        if magicNum==0:
            i=indexOfMax()
            piles[i]=piles[i]-1
        else:
            magicIndex=leftmostBitIndex(magicNum)
            targetIndex=(-1)
            for i in range(3):
                if (piles[i]>>magicIndex)%2==1:
                    targetNum=piles[i]
                    targetIndex=i
                    break
            replacement=0
            for i in range(magicIndex):
                magicDigit=(magicNum>>i)%2
                targetDigit=(piles[targetIndex]>>i)%2
                if magicDigit==1:
                    replacementDigit=(magicDigit-targetDigit)
                else:
                    replacementDigit=targetDigit
                replacement=replacement+replacementDigit*pow(2,i)
            newNum=targetNum-(targetNum%pow(2,magicIndex+1))+replacement
            piles[targetIndex]=newNum
        print piles

    ############ not used in this script:

    def matilda(n):
        if 0<=n<pow(2,29):
            for i in range(1,31):
                iOnes=pow(2,i)-1
                if n<=iOnes:
                    return iOnes-n
        else:
            raise
  • Tim Peters at Aug 18, 2003 at 1:11 am
    [Elaine Jackson]
    In case any fellow newbies have been following this thread, here is
    the finished Nim script. It doesn't use bit-manipulation as much as I
    thought it would. (Still, I got the basics straight. Thanks again to
    everyone who helped out!) For a readable account of the optimal
    strategy for Nim, see Hardy & Wright's "Introduction to the Theory of
    Numbers". Peace.

    ## NIM
    from random import random
    piles=[0,0,0]

    ...

    def yourMove():
        magicNum=piles[0]^piles[1]^piles[2]
        if magicNum==0:
            i=indexOfMax()
            piles[i]=piles[i]-1
        else:
            magicIndex=leftmostBitIndex(magicNum)
            targetIndex=(-1)
            for i in range(3):
                if (piles[i]>>magicIndex)%2==1:
                    targetNum=piles[i]
                    targetIndex=i
                    break
            replacement=0
            for i in range(magicIndex):
                magicDigit=(magicNum>>i)%2
                targetDigit=(piles[targetIndex]>>i)%2
                if magicDigit==1:
                    replacementDigit=(magicDigit-targetDigit)
                else:
                    replacementDigit=targetDigit
                replacement=replacement+replacementDigit*pow(2,i)
            newNum=targetNum-(targetNum%pow(2,magicIndex+1))+replacement
            piles[targetIndex]=newNum
        print piles
    OK, what you're trying to do is force the xor of the pile counts to 0.
    There may be more than one way to do that. For example, if piles is

    [2, 6, 7]

    you can force a win by doing any of these:

    remove 1 from the 2 pile, leaving [1, 6, 7]
    remove 1 from the 6 pile, leaving [2, 5, 7]
    remove 3 from the 7 pile, leaving [2, 6, 4]

    because 1^6^7 == 2^5^7 == 2^6^4 == 0. Now the educational <wink> thing is
    that it's possible to find all these solutions easily *without* picking
    apart any bits:

    def myMove(piles):
        all = 0
        for count in piles:
            all ^= count
        for i, count in enumerate(piles):  # enumerate() new in 2.3
            all_without_count = all ^ count
            if all_without_count < count:
                print "leave %d in pile #%d" % (all_without_count, i)

    For example,
    >>> myMove([2, 6, 7])
    leave 1 in pile #0
    leave 5 in pile #1
    leave 4 in pile #2

    Note that the function works for any number of piles, and doesn't care how
    big each pile may be; for example,
    >>> myMove([2**50, 3**67, 5**12, 76**30])
    leave 92709463147897838211662074651978 in pile #3
    >>> 2**50 ^ 3**67 ^ 5**12 ^ 92709463147897838211662074651978
    0L

    I think you'll enjoy figuring out why that function works; proving that it
    finds all solutions is a bit delicate, but not truly hard.
  • Elaine Jackson at Aug 18, 2003 at 4:50 pm
    Cool. Thanks a bunch. I find the documentation about 'enumerate' a little
    sketchy, but it seems like, as a first approximation, I can think of
    enumerate(piles) as the list of all pairs [ i, piles[i] ] with 0 <= i <
    len(piles). Is that right?
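
    That reading is essentially right, with one refinement: enumerate()
    yields (index, item) tuples one at a time rather than building a list:

    >>> list(enumerate(['a', 'b', 'c']))
    [(0, 'a'), (1, 'b'), (2, 'c')]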


    "Tim Peters" <tim.one at comcast.net> wrote in message
    news:mailman.1061169151.30094.python-list at python.org...
    <snip>
    OK, what you're trying to do is force the xor of the pile counts to 0.
    There may be more than one way to do that.
    <snip>

    def myMove(piles):
        all = 0
        for count in piles:
            all ^= count
        for i, count in enumerate(piles):  # enumerate() new in 2.3
            all_without_count = all ^ count
            if all_without_count < count:
                print "leave %d in pile #%d" % (all_without_count, i)

    <snip>
  • Elaine Jackson at Aug 19, 2003 at 4:43 am
    In case anybody is still following this thread, the correctness and completeness
    of the 'myMove' function (see below) are easy consequences of the following
    lemmas:
    1) xor is commutative and associative
    2) zero is the identity element for xor
    3) every number is its own inverse wrt xor
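
    A brute-force check of the three lemmas (not in the original post):

    import random
    for trial in range(1000):
        a = random.randrange(2**16)
        b = random.randrange(2**16)
        c = random.randrange(2**16)
        assert a ^ b == b ^ a               # 1) commutative
        assert (a ^ b) ^ c == a ^ (b ^ c)   # 1) associative
        assert a ^ 0 == a                   # 2) zero is the identity
        assert a ^ a == 0                   # 3) own inverse wrt xor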


    "Tim Peters" <tim.one at comcast.net> wrote in message
    news:mailman.1061169151.30094.python-list at python.org...
    <snip>
    Now the educational <wink> thing is that it's possible to find all these
    solutions easily *without* picking apart any bits:

    def myMove(piles):
        all = 0
        for count in piles:
            all ^= count
        for i, count in enumerate(piles):  # enumerate() new in 2.3
            all_without_count = all ^ count
            if all_without_count < count:
                print "leave %d in pile #%d" % (all_without_count, i)

    <snip>
    I think you'll enjoy figuring out why that function works; proving that it
    finds all solutions is a bit delicate, but not truly hard.
