Introduction: True or False: Boolean Is a Data Type

True! A boolean is a data type!

However, it’s also a term that gets thrown around in the electronics world by programmers who presume that everybody else knows what they’re talking about; I can personally attest that this is not always the case. In light of this, let’s go over some of the data types that are commonly used in programming.

Step 1: What Is a Data Type?

In a broad sense, data types are ways to represent different kinds of information in a computer processor or FPGA. Today we will be discussing booleans, integers, chars, and floating-point numbers, all of which are commonly known as primitive data types.

There are also other types of data, known as composite data types, although these are generally just combinations of the primitive data types; an array is a common example.

What's important for us to know is that computer processors, microprocessors, and FPGAs all store these data types in a binary (digital) format as a series of 1’s and 0’s, represented as high and low logic levels. Within the computer system, each ‘1’ or ‘0’ (a bit) is classically grouped together in sets of eight to form what is known as a byte. Because a processor communicates in this byte format with other components and devices, every data type, even one that only needs a single bit to be represented, will be stored inside of a byte.
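If you are curious what this looks like in practice, here is a minimal C++ sketch (compiled on a desktop machine rather than a microcontroller, so the exact sizes can vary by platform) that asks the compiler how many bytes each primitive type occupies:

```cpp
#include <iostream>

int main() {
    // sizeof reports how many bytes a type occupies on this platform
    std::cout << "bool:  " << sizeof(bool)  << " byte(s)\n";  // typically 1
    std::cout << "char:  " << sizeof(char)  << " byte(s)\n";  // always 1 in C++
    std::cout << "int:   " << sizeof(int)   << " byte(s)\n";  // typically 4
    std::cout << "float: " << sizeof(float) << " byte(s)\n";  // typically 4
    return 0;
}
```

Notice that even a boolean, which only needs a single bit, still occupies a full byte.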

Step 2: Details on the Boolean Type

A boolean, as you might have guessed, is a data type that represents “true” or “false”. However, when storing information on a computer, there is no hardware capable of storing a literal “true” or “false”, so many programming languages instead make the true and false values equivalent to the numbers ‘1’ and ‘0’, represented by high and low voltages. This allows for the potential operation of “true + true” being equal to ‘2’.

How this fun fact is actually useful in a real-life situation is beyond me, but hey, fun facts are still fun!
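If you would like to see the fun fact for yourself, here is a quick C++ demonstration; when a bool is used in arithmetic, it gets promoted to an int, with true becoming ‘1’ and false becoming ‘0’:

```cpp
#include <iostream>

int main() {
    bool a = true;
    bool b = true;
    // In arithmetic, each bool is promoted to an int (true -> 1, false -> 0)
    std::cout << (a + b) << '\n';         // prints 2
    std::cout << (true + false) << '\n';  // prints 1
    return 0;
}
```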

In practice, I find the true-or-false nature of a boolean a great way to keep track of something that toggles between two states, such as “on or off” or “left or right”, as in the sketch below.
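Here is a minimal sketch of that kind of bookkeeping: a boolean flag that flips every time some event occurs (the “button press” loop is made up purely for illustration):

```cpp
#include <iostream>

int main() {
    bool ledOn = false;  // track whether an (imaginary) LED is currently on

    for (int press = 1; press <= 4; ++press) {
        ledOn = !ledOn;  // each simulated button press toggles the state
        std::cout << "Press " << press << ": LED is "
                  << (ledOn ? "on" : "off") << '\n';
    }
    return 0;
}
```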

Picture from jdhitsolutions.com

Step 3: Details on the Character Type

A character, commonly called a char, is a data type (one byte worth of bits in the C++ programming language) that stores a number defining the visual representation of a symbol. These symbols are commonly defined on the ASCII table and consist of many of the characters that you see on your keyboard.

A char on its own is a primitive type, but characters readily build into composite ones: many characters can be placed together to form a “string” (a series of characters). Likewise, an array of bits (representing pixels on a display that can be turned on or off) is needed to actually draw each character, and with the right resources you can define your own bit patterns to display “non-standard” characters.
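To see the number hiding inside a char, you can cast it to an int; this short C++ sketch shows the ASCII code behind a letter and chars lining up into a string:

```cpp
#include <iostream>
#include <string>

int main() {
    char letter = 'A';
    // Casting reveals the ASCII code that the char actually stores
    std::cout << letter << " has ASCII code "
              << static_cast<int>(letter) << '\n';  // prints 65

    char next = letter + 1;  // arithmetic works on the underlying code
    std::cout << "One code past " << letter << " is " << next << '\n';  // B

    std::string word = "chipKIT";  // a string: chars placed in a row
    std::cout << word << " is " << word.size() << " characters long\n";
    return 0;
}
```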

Step 4: Details on the Integer Type

Integers, usually shortened to just "int", are the standard way to store whole numbers within code. You can write their values in binary, decimal, or hexadecimal notation, and generally no special manipulation is required on the programmer's part to store one number style over another; the notation only affects how the number appears in your source code, not how it is stored.
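For example, here is the same value written in all three notations (binary literals with the 0b prefix require C++14 or newer):

```cpp
#include <iostream>

int main() {
    int dec = 42;        // decimal notation
    int hex = 0x2A;      // hexadecimal notation
    int bin = 0b101010;  // binary notation (C++14 and newer)

    // All three variables hold exactly the same stored value
    std::cout << dec << ' ' << hex << ' ' << bin << '\n';  // prints 42 42 42
    return 0;
}
```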

On chipKIT™ boards with their PIC32 processor, integers are 32 bits long, allowing them to store numbers ranging from -2,147,483,648 to +2,147,483,647 for the signed (positive or negative) integer and from 0 to 4,294,967,295 for the unsigned (just positive) integer. Despite integers being natively 32 bits long, it is possible to declare one of a smaller size, such as only 16 bits or 8 bits, in order to conserve space within the processor, although naturally this comes at the cost of range: the span of numbers those smaller integers can accept shrinks exponentially with every bit removed.
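If you want one of those smaller integers, the <cstdint> header provides fixed-width types; the sketch below prints their ranges so you can see how quickly they shrink:

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    // The unary + promotes the 8-bit values to int so they print as
    // numbers rather than as characters
    std::cout << "int8_t:  " << +std::numeric_limits<std::int8_t>::min()
              << " to " << +std::numeric_limits<std::int8_t>::max() << '\n';
    // int8_t:  -128 to 127

    std::cout << "int16_t: " << std::numeric_limits<std::int16_t>::min()
              << " to " << std::numeric_limits<std::int16_t>::max() << '\n';
    // int16_t: -32768 to 32767

    std::cout << "int32_t: " << std::numeric_limits<std::int32_t>::min()
              << " to " << std::numeric_limits<std::int32_t>::max() << '\n';
    // int32_t: -2147483648 to 2147483647
    return 0;
}
```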

The restriction with integers is that they are unable to accept any decimal places. If you attempt to give an integer a value of 31.96, it will only store the 31. The 0.96 is instead truncated, meaning that it is discarded entirely with no rounding whatsoever.
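Truncation in action (most compilers will warn about this implicit conversion, but it is legal):

```cpp
#include <iostream>

int main() {
    int truncated = 31.96;           // fractional part is silently dropped
    std::cout << truncated << '\n';  // prints 31, not 32
    return 0;
}
```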

Step 5: Details on the Float Type

The final data type that I will talk about is the floating point number, or "float". This is a data type that is able to accept numbers with decimal places.

The way that this data type works is through a style of scientific notation: a given number is arranged so that there is one value before the decimal point and the rest of the significant digits are after it. Of the 32 bits, one is dedicated to the sign, eight to the exponent indicating the magnitude of the number, and the remaining 23 to those significant digits. In truth, I do not understand much more about how a float internally works beyond that, but there is (as expected) a nice Wikipedia article on floats.
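Assuming the standard IEEE 754 single-precision layout (which both the PIC32 and typical desktop compilers use), here is a sketch that pulls a float apart into those three fields:

```cpp
#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    float value = -6.25f;  // equals -1.5625 x 2^2

    std::uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);  // copy out the raw 32 bits

    std::uint32_t sign     = bits >> 31;           // 1 sign bit
    std::uint32_t exponent = (bits >> 23) & 0xFF;  // 8 bits, biased by 127
    std::uint32_t mantissa = bits & 0x7FFFFF;      // 23 significand bits

    std::cout << "sign:     " << sign << '\n';      // 1 (negative)
    std::cout << "exponent: " << exponent << '\n';  // 129, i.e. 2^(129-127)
    std::cout << "mantissa: " << mantissa << '\n';  // bits of the 0.5625 part
    return 0;
}
```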

In terms of the real-life application that most of us would prefer to know, this equates to a “float” being able to handle and keep track of about seven significant decimal digits. A “double” is similar to a float, except that it uses two sets of 32 bits for a total of 64 bits (eight bytes). As a double-precision floating-point value (hence the name), it is able to keep track of about 16 significant digits, allowing for more accurate calculations, although calculations involving doubles do take longer than floats because each value takes up two slots of memory. Luckily, when using a fast microcontroller such as Digilent's chipKIT boards, this time difference is negligible for most applications that I personally do.
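Printing the same fraction both ways shows roughly where each type runs out of accurate digits:

```cpp
#include <iomanip>
#include <iostream>

int main() {
    float  f = 1.0f / 3.0f;
    double d = 1.0 / 3.0;

    std::cout << std::setprecision(20);
    std::cout << "float:  " << f << '\n';  // accurate to about 7 digits
    std::cout << "double: " << d << '\n';  // accurate to about 16 digits
    return 0;
}
```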

Picture from Single-precision floating-point format Wikipedia article by Fresheneesz

Step 6: What Now?

I would personally recommend going off into the world and trying your hand at some coding. Naturally, there are many places that you can find to learn some more about coding, whether for a microcontroller, an FPGA, HTML, or even LabVIEW, and they all have their own uses in industry.

I personally started learning how to do some coding at learn.digilentinc.com with a chipKIT microcontroller.

Feel free to post in the comments below if you have any questions!