Understanding Number Base Systems and Digital Conversions
Number base systems, also called numeral systems or radix systems, are the foundation of how computers store, process, and communicate every piece of data you interact with daily. When you type a character on your keyboard, view a pixel on your screen, or send a packet across a network, the underlying representation is a sequence of digits in a particular base. This tool gives you instant, synchronized conversion between the most common bases (binary, octal, decimal, and hexadecimal) and supports arbitrary bases from 2 to 36, along with specialized views for bit-level operations, signed integers, and floating-point representations.
The Language of Hardware
Binary (base 2) uses only two digits: 0 and 1. Every digital circuit, from the simplest logic gate to the most complex CPU, operates on binary signals. A single binary digit is called a bit, and bits are grouped into bytes (8 bits), words (16, 32, or 64 bits), and larger units. Understanding binary is essential for anyone working with low-level programming, embedded systems, network protocols, or cryptography. The bit visualization panel in this tool lets you see and click individual bits, toggle between 8-bit, 16-bit, and 32-bit widths, and immediately see the decimal equivalent update in real time.
The Legacy Shorthand
Octal (base 8) groups binary digits into sets of three (since 2 raised to the third power equals 8, one octal digit encodes exactly three bits). Historically, octal was the dominant notation on early systems such as the DEC PDP-8, and it survives today in Unix file permissions. The Unix chmod command still uses octal notation: 755 means the owner can read, write, and execute (7 = 111 in binary), while the group and others can read and execute (5 = 101). Although hexadecimal has largely replaced octal in modern contexts, octal remains relevant in Unix/Linux system administration and some legacy codebases.
Human-Native Numbering
Decimal (base 10) is the number system humans grow up using, almost certainly because we have ten fingers. It uses digits 0 through 9 and is the default representation in virtually every programming language, spreadsheet, and calculator. When you read a memory address, file size, or network port number in a log file or user interface, it is typically displayed in decimal. This tool uses decimal as the bridge between all other bases: type a decimal value and instantly see its binary, octal, hexadecimal, and custom-base equivalents.
The Developer's Favorite
Hexadecimal (base 16) uses digits 0-9 and letters A-F to represent values from 0 to 15. Each hex digit corresponds to exactly four binary bits, making hex a compact and readable shorthand for binary data. Memory addresses, color codes (#FF5733), MAC addresses (00:1A:2B:3C:4D:5E), and cryptographic hashes are all conventionally displayed in hex. A single byte (8 bits) is always exactly two hex digits, which is why hex dumps display data in pairs. This predictable relationship between hex and binary is what makes hexadecimal indispensable in software development, reverse engineering, and digital forensics.
Beyond the Common Four
While binary, octal, decimal, and hexadecimal cover the vast majority of practical use cases, there are legitimate reasons to work with other bases. Base 3 (ternary) appears in balanced-ternary computer research and some encoding schemes. Base 36 uses digits 0-9 and letters A-Z, maximizing information density for alphanumeric-only representations; it is commonly used for URL shorteners and compact identifiers. Base 32 underlies encoding schemes such as RFC 4648 Base32 (letters A-Z and digits 2-7, used for TOTP authentication secrets) and Crockford's Base32 (digits and letters with easily confused characters omitted). This tool supports any base from 2 to 36, updating the custom base field in real time as you change the base selector.
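JavaScript's built-in radix support covers this same 2 to 36 range, so the behavior is easy to try directly:

```javascript
// Arbitrary bases 2 through 36 via the radix argument.
console.log((255).toString(36));  // "73"  (7*36 + 3 = 255)
console.log(parseInt('zz', 36));  // 1295 (35*36 + 35)
console.log((5).toString(3));     // "12", ternary
```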
Representing Negative Integers
In most modern computer architectures, signed integers are stored using two's complement representation. The most significant bit serves as the sign bit: 0 for positive, 1 for negative. To negate a number in two's complement, you invert all bits and add 1. This scheme has elegant properties: there is only one representation of zero (unlike sign-magnitude or ones' complement), and addition works identically for signed and unsigned integers at the hardware level, simplifying CPU design. The two's complement panel in this tool shows you the binary and hex representations for any signed integer, along with the range limits for your chosen bit width (8, 16, or 32 bits).
IEEE 754 Floating-Point Representation
Floating-point numbers follow the IEEE 754 standard, which divides a binary representation into three fields: a sign bit, an exponent field, and a mantissa (significand) field. For 32-bit single precision, the layout is 1 sign bit, 8 exponent bits, and 23 mantissa bits. For 64-bit double precision, it is 1 sign bit, 11 exponent bits, and 52 mantissa bits. The exponent uses a biased encoding (bias of 127 for single, 1023 for double), and the mantissa represents the fractional part of a normalized binary number with an implicit leading 1. The IEEE 754 viewer in this tool color-codes each field (red for sign, yellow for exponent, green for mantissa) so you can visually understand how any decimal float maps to its binary representation.
Numbers as Characters
At the most fundamental level, every character you see on screen is a number. ASCII (American Standard Code for Information Interchange) defines 128 characters, mapping integers 0-127 to control codes, digits, uppercase letters, lowercase letters, and punctuation. Unicode extends this to over 143,000 characters across 154 scripts, using code points from U+0000 to U+10FFFF. The ASCII/Unicode converter in this tool lets you type text and see each character's code point in decimal and hex, or enter code points and see the resulting characters. This is invaluable for debugging encoding issues, crafting escape sequences, and understanding character sets.
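The character-to-number mapping is directly visible through JavaScript's code point APIs:

```javascript
// Every character is a number: code points in decimal and hex.
console.log('A'.codePointAt(0));                // 65
console.log('A'.codePointAt(0).toString(16));   // "41"
console.log(String.fromCodePoint(0x1f600));     // the U+1F600 emoji
console.log('😀'.codePointAt(0).toString(16));  // "1f600"
```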
Research Methodology
This number base converter tool was built after analyzing search patterns, user requirements, and existing solutions. We tested across Chrome, Firefox, Safari, and Edge. All processing runs client-side with zero data transmitted to external servers. Last reviewed March 19, 2026.
Performance
Measured via Google Lighthouse. The converter ships as a single HTML file with zero external JS dependencies, which keeps load times fast.
Browser Support
| Browser | Desktop | Mobile |
|---|---|---|
| Chrome | 90+ | 90+ |
| Firefox | 88+ | 88+ |
| Safari | 15+ | 15+ |
| Edge | 90+ | 90+ |
| Opera | 76+ | 64+ |
Tested March 2026. Data sourced from caniuse.com.
Frequently Asked Questions
1. How do I convert a negative number to binary?
Negative numbers in computers are typically represented using two's complement notation. To convert a negative decimal number to binary: first, write the positive version in binary with the appropriate bit width (8, 16, or 32 bits). Then invert every bit (change 0s to 1s and 1s to 0s). Finally, add 1 to the result. For example, -42 in 8-bit two's complement: 42 = 00101010, inverted = 11010101, plus 1 = 11010110. Use the Two's Complement panel in this tool to perform this conversion instantly for any signed integer.
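The same result can be checked in one line of JavaScript, using a bit mask to truncate the value to 8 bits before printing:

```javascript
// Mask -42 down to its low 8 bits, then print the binary pattern.
console.log(((-42) & 0xff).toString(2));  // "11010110"
```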
2. What is the difference between signed and unsigned integers?
An unsigned integer uses all its bits to represent magnitude, so an 8-bit unsigned integer ranges from 0 to 255. A signed integer reserves the most significant bit as a sign indicator, so an 8-bit signed integer (in two's complement) ranges from -128 to +127. Both use the same number of bits, but interpret them differently. In most programming languages, you choose between signed and unsigned types depending on whether you need to represent negative values. The Two's Complement panel shows both interpretations for any input value.
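The two interpretations of a single byte can be compared in JavaScript, using a shift pair to sign-extend the value (a common idiom, since JS has no int8 type):

```javascript
// The same 8-bit pattern read two ways.
const byte = 0xff;                // all eight bits set
console.log(byte);                // 255, the unsigned reading
console.log((byte << 24) >> 24);  // -1, sign-extended to the signed reading
```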
3. Why does 0.1 + 0.2 not equal 0.3 in floating point?
The decimal fractions 0.1 and 0.2 cannot be represented exactly in binary floating point, just as 1/3 cannot be represented exactly in decimal. When your computer stores 0.1, it actually stores the closest representable IEEE 754 value, which is approximately 0.1000000000000000055511151231257827021181583404541015625. The same happens for 0.2. When these imprecise values are added, the result is approximately 0.30000000000000004, not exactly 0.3. Use the IEEE 754 viewer to see the exact bit patterns and understand why this rounding occurs.
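The rounding is easy to observe directly:

```javascript
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false
console.log((0.1).toFixed(20));  // shows the stored approximation of 0.1
```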
4. How do hex color codes work?
A hex color code like #FF5733 encodes three 8-bit values: red (FF = 255), green (57 = 87), and blue (33 = 51). Each pair of hex digits represents one byte (0-255) of color intensity. Some formats add a fourth pair for alpha (transparency), like #FF573380 where 80 hex = 128 decimal = roughly 50% opacity. You can use this converter to break down any hex color component into its decimal equivalent, or convert decimal RGB values into hex for use in CSS and design tools.
5. What bases are commonly used in programming?
The four most common bases in software development are: binary (base 2) for bit manipulation, hardware interfaces, and flag fields; octal (base 8) for Unix file permissions and some legacy systems; decimal (base 10) for human-readable output and general arithmetic; and hexadecimal (base 16) for memory addresses, color codes, byte values, and cryptographic hashes. Base 64 is also widely used for encoding binary data as text (like email attachments and data URIs), though it uses a different encoding scheme than positional notation. Base 36 appears in URL shorteners and compact identifier systems.
6. What is the maximum value for each bit width?
For unsigned integers: 8-bit maximum is 255 (FF hex), 16-bit is 65,535 (FFFF hex), 32-bit is 4,294,967,295 (FFFFFFFF hex), and 64-bit is 18,446,744,073,709,551,615. For signed two's complement integers: 8-bit ranges from -128 to 127, 16-bit from -32,768 to 32,767, 32-bit from -2,147,483,648 to 2,147,483,647, and 64-bit from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. JavaScript uses 64-bit doubles internally, so this tool accurately handles integers up to 2^53 - 1 (Number.MAX_SAFE_INTEGER = 9,007,199,254,740,991).
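For values beyond that safe-integer limit, JavaScript's BigInt gives exact results, as a quick check shows:

```javascript
// Exact 64-bit arithmetic requires BigInt, not Number.
console.log(Number.MAX_SAFE_INTEGER);        // 9007199254740991
console.log((2n ** 64n - 1n).toString());    // "18446744073709551615"
console.log((2n ** 64n - 1n).toString(16));  // "ffffffffffffffff"
```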
7. Does this tool send my data to a server?
No. This number base converter runs entirely in your browser using client-side JavaScript. No data is transmitted to any server, API, or third-party service. There are no analytics trackers, cookies, or data collection mechanisms. All conversions happen instantly on your device. The tool works offline once the page has loaded. This makes it safe to use with sensitive numerical data such as memory addresses from proprietary software or internal network configurations.
8. How does the bit visualization work?
The bit visualization panel displays the current decimal value as a row of individual bit cells. Each cell shows either 0 or 1, and you can click any cell to toggle it. When you click a bit, the corresponding decimal value updates immediately, and all four base fields (binary, octal, decimal, hex) synchronize automatically. You can switch between 8-bit, 16-bit, and 32-bit widths using the toggle buttons. Bits are numbered from right to left, with bit 0 being the least significant bit (LSB) and the highest-numbered bit being the most significant bit (MSB). Groups of 4 bits are visually separated for easier reading.