
An introduction to Binary

Posted by Mike Dailly on 25 April 2014

Back in the day, binary and hex were a way of life, probably because we never used high level languages such as BASIC - they were simply too slow - and that left us pretty close to the metal of the machine. The advantage of this, of course, is that we understood exactly how the machine worked, and knew lots of shortcuts to make things faster - a must in the days when CPUs barely crawled along at a few MHz.

These days however, with the power of even the most average PC, developers no longer have to know any of this. They can do things the long way, and the speed of the machine (not to mention its more complex CPU construction) will make up for any shortcomings this approach has.

As a very simple example, in the past multiplying by 32 might have taken several CPU cycles to execute, while a simple binary shift doing the same thing would have taken only one. As machines have become more complex, they have also reduced the time many complex instructions take to run, so that now a 32x32 bit multiply might well take only one cycle - the same as the binary operation.
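Just to make that concrete (the shift operator << involved here is covered properly later on), the two versions look like this - value and n are illustrative names:

value = n * 32;   // the multiply - once several cycles, now usually one
value = n << 5;   // the binary version - shift left by 5 bits, since 2^5 = 32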

This is great news of course; it means we no longer have to optimise every line of code we write. But if this is the case, should we really care about binary at all?

Sure we should. While it's true you can still get some speed ups - and sometimes these can be significant - knowing the CPU better can also lead to writing better code, packing data better, and making some tasks just plain simple.

Number theory

So let's start with the most basic binary theory: how numbers are created. Take this table first....

000 = 0
001 = 1
010 = 2
100 = 4

In binary land, 10 really does equal 2! Each bit is worth 2 times the previous one, with the first bit being equal to 1. So bit 2 = 2, bit 3 = 4, bit 4 = 8 and so on (as shown below in this BYTE table - a byte being a collection of 8 bits).

00000001 = 1
00000010 = 2
00000100 = 4
00001000 = 8
00010000 = 16
00100000 = 32
01000000 = 64
10000000 = 128

So how do we create more complex numbers? Well, a single bit can only store a 0 or a 1, and that's it, so for more complex numbers we add bits together. If, for example, we wanted to make 6, we would add 4 and 2 together like so.

00000010   = 2
00000100   = 4
00000110   = 6

This is true of all binary numbers, and is how the computer makes up any number. Let's take a slightly more complicated number: 23. The number 23 is actually made up of 1+2+4+16, or 00010111. How about a much more complex example - 196? Well, that's made from 128+64+4, or 11000100. So not actually that complex. If we start working with values outside the range of a BYTE (which can store numbers from 0 to 255), it does start to get a little harder to track. For example, 217,361 is 110101000100010001 in binary. Or, 1+16+256+...well, I'll let you work this one out. But the rules are the same: each number is created by adding multiple bits together.
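If you'd like to see a number pulled apart bit by bit, here's a minimal sketch (GML-style, with illustrative names) that peels off the lowest bit each time around using mod and div:

var value = 196;
var bits = "";
repeat (8)
{
    bits = string(value mod 2) + bits;   // take the lowest bit and prepend it
    value = value div 2;                 // drop down to the next bit
}
// bits is now "11000100"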

 

Data types.

In computers today, there are several native types (that is, types the CPU knows how to handle directly): BYTE, SHORT, WORD, LONG. These sometimes have different names, but basically they equate to 8 bits, 16 bits, 32 bits and 64 bits. (You can also get 128 bits or more depending on the CPU etc.) 1024 bytes make up a Kilobyte (K), 1024K make up a Megabyte (MB), 1024MB make up a Gigabyte (GB), 1024GB make up a Terabyte (TB) and finally 1024TB make up a Petabyte (PB). Actually... they do go on (and on, and on...), but these are the ones in use today. (See http://en.wikipedia.org/wiki/Byte for more info.)

Back in the 80's when the original batch of 8bit home computers was launched, we had anywhere from 1K to 128K - that is, 128*1024 bytes (or 131,072 bytes). The Commodore 64 was famed for its huge 64K of RAM (or 65,536 bytes), while the ZX Spectrum started out with 16K and then 48K, with 128K coming later. What I actually started out on was a puny 1K ZX81; yep, just 1024 bytes of RAM - and not all of it usable by me either. Fortunately you could buy a RAM PACK, and that got you up to 16K (at the time - you could get up to 64K later, if you were RICH!), which was obviously far more usable, and fun!

So, the reason for listing all these numbers is to show the utter explosion of storage space modern computers now have. We made some amazing games back when 48K or 64K was the norm, yet these days my PC has 8GB of RAM! Or... 8,589,934,592 bytes!! Lastly, if a byte has 8 bits... then 8GB of RAM is actually 64 gigabits of storage - or 68,719,476,736 bits.

 

Binary Operators

Now what does this mean in binary terms? Well, let's say I want to store a TRUE or FALSE value. Usually compilers will use an INT (an INT is usually defined as a signed 32bit number) and then simply assign it 0 or 1. However, having only 2 states, a TRUE/FALSE value is ideal to store in a BIT, and if we did this we could store 32 of them in each INT rather than just one. So how would we do this? Pretty easily, it turns out.

flags = flags | 1;

The "|" operator is an OR, and this means the above instruction ORs 1 into flags If you remember from earlier, using a 1 will set the first bit. If we wanted to set the second bit, we would do this:-

flags = flags | 2;

We OR in 2, because bit pattern 00000010 is equal to 2. So what exactly does the binary OR operator do? Well, it merges all the bits together into a single value, like this...

010110100
110011001
110111101

Here's what's known as a TRUTH table for the OR operator.

00 | 00 = 00
00 | 01 = 01
01 | 01 = 01
01 | 00 = 01

So wherever either input has a 1, the result has a 1; only where there are two zeros does it stay zero. The advantage of using BITS as a true/false state is that you can set several flags in one operation, something you simply couldn't do with a normal Boolean value. For example, let's say bit 1 is an active flag, and bit 3 is a visible flag. We could set both by doing this:-

flags = flags | 5;

This is because 5 is 00000101 in binary, and following the rule above, flags will get both these 2 bits merged with its own. So even if bit 1 was already set, the operation still works and bit 3 will now also be set.

Now, how about clearing flags? Well, this is where the binary AND operation comes in. When you AND something, the bits that are set in the mask are kept, while the bits that are clear in the mask are removed - like this...

01110010101
00110000100
00110000100

As you can see, where there is a bit in each value, the bit is kept, and where there is a mix of 0s and 1s these are reset to 0. Here's the truth table for ANDing.

00 & 00 = 00
01 & 00 = 00
00 & 01 = 00
01 & 01 = 01

So, only when there is a bit in each place will it be kept. What this means is that just as you can set multiple flags at once, you can also clear multiple flags at once. For example, let's take the case above, but this time clear the flags. We want to clear bits 1 and 3 (which together give the value 5), but remembering the truth table above, what we want to do is keep all the other bits and clear bits 1 and 3. This mask would be 11111111111111111111111111111010 (32 bits). It keeps ALL the bits currently set, but clears the two bits we actually want cleared. So if I had the value 1000111011 and wanted to clear bits 1 and 3 using the mask above, I would end up with this...

00000000000000000000001000111011
11111111111111111111111111111010
00000000000000000000001000111010

This is great, but if we had to work this out every time we needed to clear flags, it would become tiresome. What we need is a way to flip bits easily (and preferably without CPU cost). Fortunately there is an easy way of doing this: the NOT operator.

The NOT operator is just what it says - NOT those bits. Here's a truth table for NOT.

~ 00 = 11
~ 01 = 10
~ 10 = 01
~ 11 = 00

This operator makes removing flags very simple. Better yet, it's usually a compile-time optimisation, meaning that if you're using a constant number (i.e. not a variable), the compiler will flip the bits automatically for you. Take this statement, where we want to clear bits 1 and 3 again...

a = a & ~5;

This will actually compile down to just "a & 11111111111111111111111111111010". This makes life pretty simple in terms of clearing flags.

The last operator we want to look at is EOR (Exclusive OR, sometimes called XOR). This operator flips the bits of one value wherever the other value has a bit set. Here's the EOR truth table...

0 ^ 0 = 0
0 ^ 1 = 1
1 ^ 0 = 1
1 ^ 1 = 0

This is a curious one, but incredibly useful. For example, let's say we want a counter that simply counts from 0 to 1 and back to 0 (toggling between 0 and 1). We could add one and do an IF to see if it's reached 2, and then reset it back to 0. Or... we could add 1 and then AND it with 1 (since 01+01 = 10, and 10 & 01 = 0). Or we can do this...

a = a ^ 1;

What this does: the first time through, 0 ^ 1 = 1; the second time, 1 ^ 1 = 0 - thereby toggling things back and forth between 0 and 1.

 

So... let's recap this lesson. OR, AND, NOT and EOR let us manipulate bits with relative ease, allowing us, at the simplest level, to control multiple bits at once. We can obviously use these operations for other things too - like masking sprites, doing integer MOD operations (using AND), or doing nice looping counters - again using AND.
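To put that recap into code, here's a minimal sketch (the flag names and values are illustrative) showing each operator doing the job described above:

ACTIVE_FLAG = 1;    // bit 0
VISIBLE_FLAG = 4;   // bit 2

flags = flags | (ACTIVE_FLAG | VISIBLE_FLAG);   // OR - set both flags at once
flags = flags & ~VISIBLE_FLAG;                  // AND + NOT - clear a single flag
if ((flags & ACTIVE_FLAG) != 0) show_debug_message("active");   // AND - test a flag
flags = flags ^ ACTIVE_FLAG;                    // EOR - toggle a flag on and off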

 

Simple Binary Arithmetic

So how does a computer add? Well, let's look at a very simple example: 1+1.

00000001
00000001
00000010

Just like normal addition, we add the digits together and overflow into the next column, but unlike decimal addition we can only go up to 1, not 9. So adding 1+1 means we overflow into 10. Let's look at a more complex example.

01011011 = 91
00101101 = 45
10001000 = 136

It's obviously harder to see here, but the carries ripple along: when a column adds up to 2 it produces a 0 and carries a 1, and when a carry lands on two 1s (making 3) the column keeps a 1 and the carry continues. Fortunately, you don't ever have to worry about this unless you want to add very large numbers together (like two 128bit numbers), but that's a lesson for another time. It's also worth noting that computers can only add (or subtract, multiply or divide) 2 numbers at once - even SIMD is based on calculations of 2 numbers at a time, just doing multiple such calculations in parallel. "But that's more than 2 numbers at once," I hear you say! Well, no... it's not. Take 19+19+19. Being human, we can add all the 9's together, carry the 2, and on we go! Computers can't do that, but what they can do is this: (19+19)+19. They do each calculation in blocks of 2.

The binary calculation that is of interest to us, and of great use, is multiplication (and division). Computers can multiply by powers of 2 directly, and to do more they break a number apart and then add all the results together. Let's take a very simple example first: 4*2 = 8. To multiply by 2 in binary, we shift all the bits to the LEFT by one, like this....

00000100 * 2 = 00001000 = 8

All the bits in this case have moved to the left by one, making it move from the 3rd bit, to the 4th, and changing the value from 4 to 8.

How about a larger number?

101 = 01100101 * 2 = 11001010 = 202

Again, all the bits move along one, and that multiplies by 2. So, how about multiplying by 4? Easy - we shift everything left by 2 bits rather than one. And by 16, or 128? That would require a left shift of 4 bits, or 7 bits respectively. This is incredibly useful; it means we can do simple multiplications just by moving bits around. To do this, we use the SHIFT operator <<. Here are some examples.

00000001 << 1 = 000000010 = 2
00000001 << 2 = 000000100 = 4
00000001 << 3 = 000001000 = 8
00000001 << 4 = 000010000 = 16
00000001 << 5 = 000100000 = 32
00000001 << 6 = 001000000 = 64
00000001 << 7 = 010000000 = 128
00000001 << 8 = 100000000 = 256

Now, aside from being very useful for fast/simple multiplications, this is also very useful for setting specific BITs without having to figure out the bit value. Let's say we wanted to set bit 27 - what number is that? (134,217,728 by the way!) Well, we can use the syntax above to easily set flags like this...

A = A | (1<<27);

Okay... so the way I've been describing things so far (with bits starting at 1), this would actually be setting bit 28 - but in fact, bits start at bit 0 and count upwards, not bit 1. So while there are 32 bits in an INTEGER, the bits range from 0 to 31, not 1 to 32, and (1<<27) sets bit 27 exactly. This is actually pretty useful, as we can now set up CONSTANTS for bit numbers.

So let's say bit 27 is an active flag, and bit 0 is an exploding flag. How can we set both?

ACTIVE = 27;
BOOM = 0;
A = A | (1<<ACTIVE) | (1<<BOOM);

This may look like a lot of code, but if these numbers are constants, the compiler will pre-compile these operations into a single value so that we end up with this as actual code.

A = A | 134217729;

Clearing these bits (as we saw above) is simply a matter of using the NOT operator, like this…

A = A & ~((1<<ACTIVE) | (1<<BOOM));

So this happily lets us set and clear any bits we’d like, and it also lets us compress our data structures massively. Compressing data structures is a good thing, because if you use less memory, you get fewer cache misses, and your code just runs faster. Put it this way: what’s quicker, copying 32MB of data, or 4MB? Well, quite clearly 4MB is. So if you can pack all your flags down into a single memory access, that's good!
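As a sketch of what that packing can look like (the field sizes and names here are just an example), we can squeeze a 5 bit x cell, a 5 bit y cell and six 1 bit flags into a single value using shifts and masks - the right shift >> used to unpack is covered properly in the next section:

packed = (x_cell & 31) | ((y_cell & 31) << 5) | ((flag_bits & 63) << 10);

x_cell    = packed & 31;          // bits 0-4
y_cell    = (packed >> 5) & 31;   // bits 5-9
flag_bits = (packed >> 10) & 63;  // bits 10-15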

 

Binary Division

I said above that binary division was also incredibly useful - why?

Well, first let's take a quick look at how you do division, and why it’s going to be so useful. Take a simple number - 64 - and divide it by 32 (which gives 2, by the way, just in case you were wondering!).

64 / 32 = 01000000  >> 5 = 00000010 

So there you go: shifting the single bit down by 5 (which is the number of shifts that corresponds to 32 - see the table above) gives us 2. But what happens if there are other bits in there? Well, let's take a look.

68 / 32 = 01000100  >> 5 = 00000010 

So there you go…. it’s exactly the same. The bits we shift out are simply lost. This is actually really useful, because when dividing down, if we need the remainder there’s an even easier way to get it, which I’ll come to in a moment. But first, let's take a practical example. I have an X and Y position, and I want to get the grid cell this falls in, where the grid cells are 32x32 in size. This method allows us to store objects, collisions, flags - all manner of things - and access them very quickly. So here we go..

var X_index = x >> 5;
var Y_index = y >> 5;
cell_data = mygrid[# X_index, Y_index];

This is quick - very quick. It avoids the need to do a floating point divide and then a floor() calculation, which all adds up.

So, what if we wanted the remainder? Perhaps the remainder is used as some kind of ordering value; whatever the reason, getting a remainder is as simple as doing an AND.

var r = x & 31;
var X_index = x >> 5;

Now, the sharp-sighted amongst you might have noticed I’ve used both here (as is so often the case), but this is still only a couple of instructions. But why the 31? Well, since bit 5 is worth 32, all the bits below it add up to 31, and that’s the maximum possible remainder - so that’s what we AND with. (We could also use ((1<<5)-1), which gives 32-1 = 31 - which is 0x1F in hex, BTW.) Now, if I were to do this without understanding binary, I would do this…

var r = x mod 32;
var X_index = floor(x/32);

So why is this much worse? Well, in order to divide by 32, we have to execute a floating point divide - which obviously takes time - but in order to do the mod 32, you actually have to do another one! If we were doing this in assembler, we would actually get BOTH values from a single divide, but you don’t get that in high level languages (well… not very often), and so you have to do all the work twice. This adds up, especially if you’re in a tight loop with lots of calculations like this. Integer divides like the ones shown above really help when optimising your game.

 

Tile Alignment

So, one of the driving forces behind this article was to show how to use flags properly, but also to attempt (possibly in vain) to get folk to write platform games properly. Currently GameMaker devs use the frankly horrible command place_free(), and then when a collision is found, they try to slowly move the object out by looping along the x or y axis while continuing to execute that horrible command, or by using the move_outside_all() command (which does the same thing anyway). Take a look at my blog on precise collision and think about just how much CPU time you burn when doing this.

So, what’s the faster way to do this? Well, if we use proper power-of-2 tiles, then we have a very simple method that’s also lightning fast. Studio has a platformer example that uses this method, but the key part is how we move out of a solid tile and reposition the sprite.

If we are moving right and we’ve moved into a collision block, then since we know everything is aligned to 32, we need to align the sprite to a 32 pixel boundary as well - preferably the one to the left, so the sprite is moved OUT of the collision! This is really easy: knowing the rules we’ve used above to get the remainder, and knowing how to get the inverse of bits, we can simply do this….

x = x & ~31;

That’s right - that’s ALL it takes to align to a 32 pixel boundary. By changing the 31 we can align to anything we like, as long as it’s a power of 2. (This is the equivalent of dividing by 32 and then multiplying by 32, thereby removing the lower bits.)

If we wanted to align to the right, then we’d do the above, but then add 32 to move into the next tile - see the sketch below. Simple. All this makes the whole collision code monumentally faster, and lets you spend the CPU time where you really need it.
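To make that concrete, the right-hand version is just one line too (a sketch, matching the 32 pixel tiles used above):

x = (x & ~31) + 32;   // snap to the boundary on the left, then step into the next tile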

 

Keys and Doors

Another example.... Let’s say you have a level with a few doors, and a key for each. How can you easily mark which key opens which door? Well, normally you'd just assign an ID to the key and the door. So what if you wanted a key to open 2 or 3 doors? Easy: you use a MASK. Each door gets a single "bit" assigned, like so: door_id=1 (0001), another with door_id=2 (0010), door_id=4 (0100), door_id=8 (1000) and so on. If we wanted the key to open doors 1 and 3, the key would have a MASK of 5 (which is 101 in binary). If we perform an AND of the two and it comes out "not zero", then we know the key can open the door. You could also have keys that open nothing by giving them a MASK of 0. See the code below for the actual check....

if( (key_id & door_id) != 0 ) { opendoor(); }
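For example (the values are illustrative), a key with a mask of 5 tested against door 3:

key_id = 5;    // binary 101 - opens doors 1 and 3
door_id = 4;   // binary 100 - this is door 3
if( (key_id & door_id) != 0 ) { opendoor(); }   // 101 & 100 = 100, not zero, so it opens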

 

Looping counters

Let's say we want a simple animation counter counting from 0 to 15 (as we have 16 frames of animation). Now, we can either do an increment and then an IF, or we can use our knowledge of binary and remove the IF completely. IFs are slow, and if we don't need them, we should remove them.

counter = (counter+1)&15;

Since 16 is a power of 2, and since 0 is included in the counter, we can reduce the POW2 number by 1 and use it as a MASK to wrap our counter. If the counter moves from 15 to 16, we end up with bit pattern 10000, and if we then AND with 15 (bit pattern 01111) we end up with 0. This makes the above code incredibly useful for wrapping counters - as long as you use POW2 frame counts.

 

Power of 2 check

What if you wanted to check to see if something was a power of 2? Well, here's a neat little trick. The line below will return TRUE if argument0 is a power of 2.

return (argument0&(argument0-1))==0;

So if we had the number 51 (110011), what does this do? Well, we get 110011 & 110010, which obviously leaves us with a non-zero value - there are lots of "bits" left after the AND - so the result is FALSE. If we had 64 (1000000), then it becomes 1000000 & 0111111, which DOES leave us 0, so it's TRUE. (One small caveat: 0 also passes this test, so check for zero separately if that matters to you.)

 

Index alignment

Here's a quick bit of code to align to power of 2 numbers (1, 2, 4, 8, 16 and so on). This can be very useful for memory allocation, or for making sure you write data to proper boundaries. In this example, argument0 needs to be aligned to argument1 bytes, where argument1 is a power of 2. This little script rounds UP to the next boundary of the desired number.

return (argument0 + (argument1-1)) & ~(argument1-1);
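As a quick usage sketch (assuming the line above lives in a script - align_up is a made-up name here), rounding 13 up to an 8 byte boundary:

aligned = align_up(13, 8);   // (13 + 7) & ~7 = 16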

 

Lastly.... here's a famous list of cool bit twiddling tricks you can use for some pretty neat operations: Bit Twiddling Hacks.

So there you go, I hope this has been useful, and gives you some insight into some valuable optimisations and data processing.
