Today, I'm at Eastside Coffee Bar & Workspace. It's a nice little coffee shop with a bunch of seating available to sit and work. I am not at my house because I wanted to get away from the noise of plumbing work that started today.
I'm focusing on the basic primitive data types. This is one of those topics that's part of every programmer's foundational knowledge, but I think reviewing it and going a bit deeper can't hurt anyone.
First, we'll focus on primitive data types. They're called "primitive" because almost all languages implement them at a core level, whether in the base language itself or in an almost-always-included standard library.
The simplest and one of the most useful data types is the humble boolean. Its values can either be true or false. Depending on the language, they might be all lowercase, capitalized, or all uppercase. In Python, they're written as True and False. In FORTRAN, they're written as .TRUE. and .FALSE.
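Here's a small sketch of what booleans look like in Python, including the detail that Python's booleans behave like the integers 1 and 0 under the hood (the variable names are just made up for illustration):

```python
# Python capitalizes its boolean literals.
is_raining = True
has_umbrella = False

print(type(is_raining))   # booleans are their own type: <class 'bool'>
print(int(is_raining))    # True behaves like 1
print(int(has_umbrella))  # False behaves like 0
```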
Moving past how to write booleans, what are they useful for? Well, almost everything. To me, they parallel the hardware itself: at a fundamental level, transistors, RAM, the CPU, and control flow are all managed with ones and zeros. Since a bit is either 1 or 0, a boolean only needs a single bit of storage, but because most modern computers address memory in whole bytes, booleans typically occupy a full byte.
Booleans are especially useful for steering the flow of a program, e.g. when writing a conditional.
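A minimal conditional in Python makes the point; the login scenario and variable names here are just an invented example:

```python
# A boolean steering control flow in a conditional.
logged_in = False

if logged_in:
    message = "Welcome back!"
else:
    message = "Please log in."

print(message)  # → Please log in.
```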
I didn't want to call this section Ints or Floats because at a human level, we're talking about numbers and how they're stored and interpreted by the computer. Most languages implement two main kinds of numbers: integers and floats.
Integers are whole numbers without a decimal point, like 2, 5, 420, 5000, or any other whole number you can think of.
Floats are numbers with a decimal point, where how precise you want the number to be becomes relevant; these could be things like 97.0, 1.234, 3.14159..., or 2.5. Since all data types take up space in memory, early number implementations were intentionally sized to accommodate most human uses of numbers, but not insanely large ones.
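That precision point is easy to see in practice. Floats are stored in binary (IEEE 754 in most languages, Python included), so some decimal values can't be represented exactly:

```python
import math

# Integer arithmetic is exact.
print(2 + 5)  # → 7

# Float arithmetic carries tiny representation errors,
# because 0.1 and 0.2 have no exact binary form.
print(0.1 + 0.2)  # → 0.30000000000000004, not exactly 0.3

# So comparing floats usually calls for a tolerance, not ==.
print(math.isclose(0.1 + 0.2, 0.3))  # → True
```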
Aside on signed vs. unsigned: I've avoided this part so far because it's a detail that matters in some programming languages but not all. In languages that distinguish signed and unsigned integers, one bit is set aside for the sign. A signed integer can represent both positive and negative numbers, while an unsigned integer can only represent non-negative numbers (zero and all positive numbers). So a typical 4-byte integer has 32 bits of space to store a number; if it's signed, one bit goes toward the sign (positive or negative), leaving the remaining 31 bits for the magnitude.
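Python's own integers are arbitrary precision, so it doesn't have this distinction itself, but we can still use it as a calculator to work out the ranges the aside describes for a 32-bit integer:

```python
bits = 32

# Unsigned: all 32 bits store magnitude → 0 through 2**32 - 1.
unsigned_max = 2**bits - 1

# Signed: one bit is spent on the sign, leaving 31 bits of
# magnitude → -(2**31) through 2**31 - 1.
signed_min = -(2 ** (bits - 1))
signed_max = 2 ** (bits - 1) - 1

print(unsigned_max)  # → 4294967295
print(signed_min)    # → -2147483648
print(signed_max)    # → 2147483647
```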
The next data type we'll look at is the string. Strings can store all kinds of characters as long as they're wrapped in quotes, single or double. Examples are: "hello world", "an emoji 😊", and even numbers: "1 apple, 2 bananas, 3.14 pies". As humans, we need strings pretty often, but some languages like C don't have a string type at all; C only provides a char data type, which holds a single character, and builds strings out of arrays of chars.
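In Python, by contrast, strings are a first-class type. One detail worth showing, using the examples from above, is that digits inside a string are still text, not numbers:

```python
greeting = "hello world"
emoji = "an emoji 😊"
mixed = "1 apple, 2 bananas, 3.14 pies"

print(len(greeting))     # length in characters → 11
print(greeting.upper())  # → HELLO WORLD

# The "1" in mixed is a character, so + concatenates
# rather than adds:
print(mixed[0] + mixed[0])  # → 11 (the string "11", not the number 2)
```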