Hex Converter Tool

Convert between hexadecimal, decimal, binary, octal, and ASCII text — instantly, with bit-level visualization and batch processing.

20 min read · 3956 words

Last tested: March 2026 • Based on our testing across major browsers and platforms

Number Base Converter

Enter a value in any field and the others update instantly. Supports arbitrarily large numbers via BigInt.

Hex Color Preview

Text ↔ Hex Converter

Convert between plain text and its hexadecimal representation. Supports multi-byte UTF-8 encoding for international characters and emoji.


Batch Converter

Convert multiple values at once. Put one value per line.


Common Hex Values Reference Table

Key hexadecimal values every developer should know. Click any value to load it in the converter above.

Hex | Decimal | Binary | Description

The Complete Guide to Hexadecimal: Understanding Number Bases in Computing

Hexadecimal is everywhere in computing, yet many developers I've met treat it as an opaque notation they copy-paste without fully understanding. If you've ever wondered why memory addresses look like 0x7FFF5FBFF8A0, why colors are written as #FF5733, or why your debugger shows 0xDEADBEEF, this guide will give you a thorough understanding. I built this tool after years of needing a fast, reliable converter that doesn't bloat the page with ads or require signups. I've verified every conversion algorithm against known-correct reference implementations and tested the tool across major browsers and platforms.

Why Hexadecimal Exists

Computers operate in binary (base-2), but binary is terrible for human readability. The number 255 in binary is 11111111 — already eight digits for a single byte. A 32-bit memory address in binary would be 32 digits long, practically impossible to read or compare at a glance. Decimal (base-10) is what humans naturally use, but it doesn't align cleanly with binary's power-of-two structure.

Hexadecimal (base-16) solves this elegantly. Because 16 is a power of 2 (2^4), each hex digit maps to exactly 4 binary digits (one "nibble"). This means a single byte (8 bits) is always exactly two hex digits, a 16-bit word is four hex digits, and a 32-bit value is eight hex digits. The mapping is clean, reversible, and fast to parse mentally once you know the digit values: A=10, B=11, C=12, D=13, E=14, F=15.
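That two-hex-digits-per-byte rule is easy to verify with a couple of shift-and-mask operations; a quick JavaScript sketch:

```javascript
// Each hex digit expands to exactly 4 bits, so a byte is always two hex digits.
const byte = 0xF3;                    // 243 decimal
const hi = (byte >> 4).toString(16);  // high nibble: "f"
const lo = (byte & 0x0F).toString(16); // low nibble: "3"
console.log(hi + lo);                 // "f3"
console.log(byte.toString(2).padStart(8, '0')); // "11110011"
```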

This isn't just academic — it's practical. When you're reading a hex dump of network packets, inspecting memory in a debugger, or parsing binary file formats, hexadecimal is orders of magnitude more readable than binary while maintaining the direct mapping to underlying bytes. You won't catch a single-bit error by staring at decimal, but in hex, the affected nibble is immediately obvious.

Number Base Fundamentals

Before diving deeper, let's make sure the foundation is solid. A number base (or radix) defines how many unique digits are used and the positional weighting of each digit. In any base b, the digit in position n (counting from 0 on the right) represents a value multiplied by b^n.

Binary (Base 2) uses digits 0 and 1. Each position is a power of 2: ..., 8, 4, 2, 1. The binary number 1101 equals 1×8 + 1×4 + 0×2 + 1×1 = 13 in decimal. Binary is the native language of digital circuits — each bit corresponds to a voltage level (high/low, on/off). Everything your computer does, from rendering this page to running an AI model, ultimately reduces to billions of binary operations per second.

Octal (Base 8) uses digits 0-7. Each octal digit represents exactly 3 bits. Octal was historically important in computing — early PDP-8 and PDP-11 minicomputers used octal extensively, and Unix file permissions still use it today (chmod 755 means owner rwx, group r-x, others r-x). You'll also see octal in C/C++ string escapes: \033 is the ESC character (27 in decimal). It's less common today but still important to understand.

Decimal (Base 10) uses digits 0-9. It's the natural human counting system (probably because we have 10 fingers), and it's what you see in user-facing interfaces, financial calculations, and everyday life. In computing, decimal is used for display but rarely for internal representation — most CPUs don't have native decimal arithmetic, so libraries convert to and from binary internally.

Hexadecimal (Base 16) uses digits 0-9 and A-F. As we discussed, its 4-bit alignment makes it ideal for representing binary data compactly. The prefix 0x in programming languages (C, Java, JavaScript, Python) signals a hex literal: 0xFF equals 255. CSS uses the # prefix for hex colors: #00FF88. Assembly language and hardware documentation use hex almost exclusively.

Conversion Algorithms: How It Works Under the Hood

Understanding the conversion algorithms helps you verify results and debug edge cases. Here's how each conversion works, which is exactly what this tool implements in JavaScript:

Hex to Decimal: Multiply each digit by its positional power of 16, then sum. For 1A3: 1×16² + 10×16¹ + 3×16⁰ = 256 + 160 + 3 = 419. For large numbers, this tool uses JavaScript's BigInt type, which handles arbitrarily large integers without the precision loss you'd get from standard floating-point Number (which is only safe up to 2^53 - 1, or 9,007,199,254,740,991).
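A sketch of this positional expansion using BigInt (hexToDecimal is an illustrative name, not necessarily what the tool's source calls it):

```javascript
// Positional expansion of a hex string into a BigInt, so arbitrarily
// long inputs keep full precision.
function hexToDecimal(hex) {
  const digits = '0123456789abcdef';
  let result = 0n;
  for (const ch of hex.toLowerCase()) {
    const value = digits.indexOf(ch);
    if (value === -1) throw new Error(`Invalid hex digit: ${ch}`);
    result = result * 16n + BigInt(value); // shift left one hex place, add digit
  }
  return result;
}

console.log(hexToDecimal('1A3'));              // 419n
console.log(hexToDecimal('DEADBEEFDEADBEEF')); // exact, beyond Number's safe range
```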

Decimal to Hex: Repeatedly divide by 16, collecting remainders. For 419: 419 / 16 = 26 remainder 3, 26 / 16 = 1 remainder 10 (A), 1 / 16 = 0 remainder 1. Reading remainders bottom-to-top gives 1A3. In JavaScript, you can use n.toString(16) for numbers within safe integer range, or implement the algorithm manually for BigInt values.
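The same division-remainder loop, sketched for BigInt values (BigInt also supports toString(16) exactly; this shows the manual algorithm the paragraph describes):

```javascript
// Repeated division by 16, collecting remainders bottom-to-top.
function decimalToHex(n) {
  if (n === 0n) return '0';
  const digits = '0123456789abcdef';
  let out = '';
  while (n > 0n) {
    out = digits[Number(n % 16n)] + out; // prepend each remainder
    n /= 16n;                            // BigInt division truncates
  }
  return out;
}

console.log(decimalToHex(419n)); // "1a3"
```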

Hex to Binary: The simplest conversion — replace each hex digit with its 4-bit binary equivalent. F→1111, A→1010, 3→0011. So FA3 = 1111 1010 0011. This direct mapping is why hex exists. No multiplication or division needed — it's a pure lookup table operation. This is also why this tool's bit visualization is so useful: you can see exactly which bits are set for any hex value.
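The lookup-table nature of this conversion is clear when you spell out all sixteen entries:

```javascript
// Pure lookup: each hex digit becomes its fixed 4-bit pattern.
const NIBBLES = {
  '0': '0000', '1': '0001', '2': '0010', '3': '0011',
  '4': '0100', '5': '0101', '6': '0110', '7': '0111',
  '8': '1000', '9': '1001', 'a': '1010', 'b': '1011',
  'c': '1100', 'd': '1101', 'e': '1110', 'f': '1111',
};
const hexToBinary = hex =>
  [...hex.toLowerCase()].map(d => NIBBLES[d]).join(' ');

console.log(hexToBinary('FA3')); // "1111 1010 0011"
```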

Text to Hex: Each character has a numeric code point. For ASCII (7-bit), characters map directly to values 0-127. The letter 'A' is 65 (0x41), 'a' is 97 (0x61), '0' is 48 (0x30), space is 32 (0x20). For Unicode text, the situation is more complex because characters can require multiple bytes in UTF-8 encoding. A single emoji like the "face with tears of joy" is code point U+1F602, which encodes to four UTF-8 bytes: F0 9F 98 82. This tool handles all of these cases correctly.
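A minimal sketch using the standard TextEncoder API to get the UTF-8 bytes, then formatting each byte as two hex digits (textToHex is an illustrative name):

```javascript
// UTF-8 encode via TextEncoder, then render each byte as two hex digits.
function textToHex(text) {
  return Array.from(new TextEncoder().encode(text))
    .map(b => b.toString(16).padStart(2, '0').toUpperCase())
    .join(' ');
}

console.log(textToHex('A'));  // "41"
console.log(textToHex('😂')); // "F0 9F 98 82"
```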

Hexadecimal in Everyday Programming

Let's look at where you'll encounter hex in real programming work, because the practical applications are what matter most.

CSS Colors: The most familiar use of hex for web developers. A CSS hex color like #FF5733 encodes three channels — Red (FF=255), Green (57=87), Blue (33=51). The 3-digit shorthand #F53 expands each digit: #FF5533. Modern CSS also supports 8-digit hex with alpha: #FF573380 is approximately 50% transparent. When you see a hex value that's 3, 4, 6, or 8 digits and starts with a letter or contains A-F, it's probably a color — which is why this tool automatically shows a color preview for such inputs.
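Shorthand expansion and channel extraction can be sketched in a few lines (parseHexColor is a hypothetical helper name, not the tool's actual function):

```javascript
// Expand 3-digit shorthand, then split a 6-digit hex color into RGB channels.
function parseHexColor(hex) {
  let h = hex.replace(/^#/, '');
  if (h.length === 3) h = [...h].map(d => d + d).join(''); // #F53 -> FF5533
  return {
    r: parseInt(h.slice(0, 2), 16),
    g: parseInt(h.slice(2, 4), 16),
    b: parseInt(h.slice(4, 6), 16),
  };
}

console.log(parseHexColor('#FF5733')); // { r: 255, g: 87, b: 51 }
console.log(parseHexColor('#F53'));    // { r: 255, g: 85, b: 51 }
```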

Memory Addresses and Pointers: When debugging in C, C++, or any systems language, you'll see memory addresses in hex. A typical x86-64 virtual address looks like 0x7FFF5FBFF8A0. The hex format makes it easy to identify memory regions — stack addresses typically start with 0x7F, heap allocations are in the lower range, and kernel-space addresses begin with 0xFFFF on 64-bit systems. You can't usefully interpret these in decimal.

Network Protocols: MAC addresses (00:1A:2B:3C:4D:5E), IPv6 addresses (2001:0db8:85a3::8a2e:0370:7334), and packet hex dumps all use hexadecimal. When analyzing network traffic with Wireshark or tcpdump, you're reading hex. Each byte in a packet is two hex digits, making it easy to identify protocol fields, flags, and payload data.

Cryptographic Hashes: SHA-256 hashes, MD5 digests, and other cryptographic outputs are displayed as hex strings. A SHA-256 hash like e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 (the hash of an empty string) is 64 hex characters representing 32 bytes (256 bits). The hex format makes it easy to compare hashes visually and to detect even small differences.

Unicode and Character Encoding: Unicode code points are written in hex: U+0041 is 'A', U+00E9 is 'é', U+1F600 is a grinning face emoji. When you see garbled text (mojibake) on a web page, examining the hex bytes of the source file can quickly reveal encoding mismatches — for example, a UTF-8 encoded file being read as Latin-1 will show characteristic byte patterns in hex that immediately identify the problem.

Binary File Formats: File format specifications (PNG, ZIP, PDF, ELF) define their structure in hex. A PNG file starts with the magic bytes 89 50 4E 47 0D 0A 1A 0A. A ZIP file starts with 50 4B 03 04 (PK followed by version bytes). A PDF starts with 25 50 44 46 (%PDF in ASCII). Knowing these signatures lets you identify corrupted or mislabeled files by examining their first few bytes in a hex editor.

JavaScript's Number Handling: Pitfalls and Solutions

JavaScript's Number type is an IEEE 754 double-precision floating-point value. This gives you 53 bits of integer precision, meaning you can safely represent integers up to 2^53 - 1 (9,007,199,254,740,991 or about 9 quadrillion). Beyond that, you start losing precision — Number(0x20000000000001) silently rounds to 9007199254740992.

For hex values larger than 13 digits (52+ bits), this tool switches to BigInt, which provides arbitrary-precision integer arithmetic. BigInt literals use the n suffix: 0xDEADBEEFDEADBEEFn. The trade-off is that BigInt operations are slower than Number operations and can't be mixed with regular Numbers without explicit conversion. But for a converter tool, correctness matters more than speed, and I've optimized the hot paths to minimize BigInt usage when it isn't needed.

One gotcha that I've seen trip up even experienced developers: parseInt('0xFF') correctly returns 255, but parseInt('FF') without the prefix returns NaN unless you specify radix 16: parseInt('FF', 16). Always pass the radix parameter to parseInt — it's one of JavaScript's most common footguns. This tool strips common prefixes (0x, 0b, 0o, #) before parsing to handle any format you throw at it.
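A quick demonstration of the radix gotcha, plus the kind of prefix-stripping normalization described above (normalize is an illustrative name, not necessarily the tool's own helper):

```javascript
// Always pass the radix: bare parseInt only recognizes hex via the 0x prefix.
console.log(parseInt('0xFF'));   // 255 (prefix triggers hex parsing)
console.log(parseInt('FF'));     // NaN (no prefix, assumed base 10)
console.log(parseInt('FF', 16)); // 255 (explicit radix: always do this)

// Stripping common prefixes first lets one parse path handle any format.
const normalize = s => s.trim().replace(/^(0x|0b|0o|#)/i, '');
console.log(parseInt(normalize('#ff'), 16)); // 255
```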

Bit-Level Operations: Understanding What Hex Represents

The bit visualization in this tool isn't just a visual flourish — it's genuinely useful for understanding bitwise operations, flag fields, and binary protocols. When you enter a hex value, you can see exactly which bits are set (1) and which are clear (0), grouped into nibbles (4-bit groups) for readability.

Consider the Unix file permission 0755 (octal). In binary, that's 111 101 101. The first group (111) means owner has read+write+execute. The second (101) means group has read+execute but not write. The third (101) is the same for others. Seeing this bit pattern makes the permission immediately understandable in a way that the raw octal number doesn't convey to beginners.
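The decomposition above can be sketched as a small helper (permissionString is a hypothetical name for illustration):

```javascript
// Turn a numeric Unix mode into the familiar rwx string by masking
// each 3-bit group: read = 4, write = 2, execute = 1.
function permissionString(mode) {
  const triplet = bits =>
    (bits & 4 ? 'r' : '-') + (bits & 2 ? 'w' : '-') + (bits & 1 ? 'x' : '-');
  return triplet(mode >> 6) + triplet((mode >> 3) & 7) + triplet(mode & 7);
}

console.log(permissionString(0o755)); // "rwxr-xr-x"
console.log(permissionString(0o644)); // "rw-r--r--"
```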

Bitwise operations are fundamental to low-level programming. AND (&) masks bits, OR (|) sets bits, XOR (^) toggles bits, NOT (~) inverts all bits. For example, to check if bit 3 is set in a value: if (value & 0x08) — because 0x08 in binary is 00001000, with only bit 3 set. The AND operation clears all other bits, returning nonzero only if bit 3 was set. Understanding this requires seeing the hex-to-binary mapping, which is exactly what the bit visualization provides.
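A quick demonstration of the four operations on a flag byte:

```javascript
// OR sets, AND tests, XOR toggles, NOT inverts.
let flags = 0b00000000;
flags |= 0x08;                            // OR: set bit 3      -> 00001000
console.log((flags & 0x08) !== 0);        // AND: test bit 3    -> true
flags ^= 0x08;                            // XOR: toggle bit 3  -> 00000000
console.log(flags);                       // 0
console.log((~0x0F & 0xFF).toString(16)); // NOT: invert low byte -> "f0"
```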

Hex in Security and Reverse Engineering

Security researchers and reverse engineers live in hexadecimal. Malware analysis involves examining executable files in hex editors, looking for suspicious strings, encoded payloads, and shellcode. A classic x86 NOP sled (used in buffer overflow exploits) shows up as a long string of 90 bytes. The CC byte is the INT 3 breakpoint instruction, often used to detect debuggers.

Hex encoding is also used as a simple obfuscation technique. URL-encoded characters use percent-hex notation: %20 for space, %3C for <. SQL injection attacks often use hex-encoded strings to bypass naive input filters: 0x27 instead of a literal single quote. Understanding hex encoding and decoding is essential for web application security testing.
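The browser's built-in URI functions handle percent-hex encoding directly:

```javascript
// Percent-hex encoding and decoding with standard built-ins.
console.log(encodeURIComponent(' '));   // "%20"
console.log(encodeURIComponent('<'));   // "%3C"
console.log(decodeURIComponent('%3C')); // "<"
```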

Working with Hex Colors: Beyond Basic Conversion

Since hex color codes are one of the most common uses of hexadecimal in web development, this tool automatically detects and previews them. When you enter a value that looks like it could be a color (3, 4, 6, or 8 hex digits), the color preview panel activates, showing you the rendered color along with its RGB and HSL equivalents.

A few non-obvious facts about hex colors that don't get enough attention. First, the perceptual brightness of a color isn't simply the average of its RGB channels. Green contributes far more to perceived brightness than red or blue (the human eye has more green-sensitive cones). The relative luminance formula weights the channels: 0.2126*R + 0.7152*G + 0.0722*B. This is why #00FF00 (pure green) appears much brighter than #0000FF (pure blue) despite both having a single channel at max.
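A sketch of the weighting (note: the full WCAG relative-luminance formula also gamma-corrects each channel before weighting; this simplified version applies the weights to linearly normalized 0-1 values for illustration):

```javascript
// Weighted sum of normalized channels, using the coefficients above.
// Simplified: skips the gamma-correction step of the full WCAG formula.
const luminance = (r, g, b) =>
  0.2126 * (r / 255) + 0.7152 * (g / 255) + 0.0722 * (b / 255);

console.log(luminance(0, 255, 0).toFixed(4)); // "0.7152" — pure green
console.log(luminance(0, 0, 255).toFixed(4)); // "0.0722" — pure blue
```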

Second, interpolating between hex colors in RGB space produces muddy midpoints. Blending #FF0000 (red) and #00FF00 (green) through RGB gives you #808000 — a murky olive, not the vibrant yellow you'd expect. Perceptually uniform color spaces like OKLCH produce much better gradients. Modern CSS color-mix() lets you choose the interpolation space, and choosing oklch almost always gives better results than srgb for intermediate colors.

Performance and Implementation Notes

This converter is designed for speed. All conversions happen in real-time as you type, with no debouncing delay. For typical values (up to 53-bit integers), the conversions use JavaScript's native parseInt() and toString() methods, which are implemented in optimized C++ in all major browser engines and execute in nanoseconds. For larger values, the BigInt fallback adds some overhead but remains fast enough to be imperceptible.

The bit visualization dynamically creates DOM elements for each bit, grouped into nibbles. For very large numbers (hundreds of bits), this could become a performance concern, so the visualization is capped at 64 bits. Beyond that, the hex and binary text representations are still accurate — you just don't get the visual bit display. This is a pragmatic trade-off: if you're working with 256-bit values, you probably don't need to see all 256 individual bits.

The tool scores well on Google PageSpeed because it has zero external JavaScript dependencies, minimal CSS, and no layout shifts. Everything loads in a single HTML file with inline CSS and JS. The only external resources are Google Fonts (Inter) and the embedded media content, both loaded in ways that avoid blocking the initial render (the media uses loading="lazy").

Common Hex Patterns Every Developer Should Recognize

Over time, you develop an intuition for common hex values. The reference table above collects the ones that I've found come up most frequently across different domains.

Endianness: Byte Order Matters

When hex values represent multi-byte data, byte order (endianness) becomes critical. Big-endian stores the most significant byte first — the hex value 0x12345678 is stored as bytes 12 34 56 78 in memory. Little-endian reverses the byte order: 78 56 34 12. Intel x86/x64 processors use little-endian, while network protocols use big-endian (hence "network byte order").

This has real consequences. If you read a 32-bit integer from a binary file without accounting for endianness, you'll get the wrong value. The bytes 01 00 00 00 read as a big-endian 32-bit integer give 16,777,216 (0x01000000), but read as little-endian they give just 1 (0x00000001). If you've ever been confused by "why is this value completely wrong when I read it from a file?", endianness is often the answer. This doesn't affect this converter tool (which deals with numeric values, not raw byte sequences), but it's essential knowledge when working with hex data from files or network captures.
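DataView makes the choice of byte order explicit, which is a handy way to see both readings of the same four bytes:

```javascript
// Write one byte, then read the same memory with both byte orders.
const buf = new ArrayBuffer(4);
const view = new DataView(buf);
view.setUint8(0, 0x01); // memory bytes: 01 00 00 00

console.log(view.getUint32(0, false)); // big-endian:    16777216 (0x01000000)
console.log(view.getUint32(0, true));  // little-endian: 1        (0x00000001)
```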

Hex in Modern Web Development

Beyond colors, hex shows up in several modern web development contexts. Content Security Policy (CSP) nonces are typically hex-encoded random values. Subresource Integrity (SRI) hashes, while usually base64-encoded, are sometimes shown in hex for debugging. WebSocket frames include hex-encoded opcodes and masks. The Web Crypto API returns ArrayBuffers that you'll often convert to hex for display or storage.

Here's a useful snippet for converting an ArrayBuffer to a hex string in JavaScript:

function bufferToHex(buffer) {
  return Array.from(new Uint8Array(buffer))      // view the buffer as individual bytes
    .map(b => b.toString(16).padStart(2, '0'))   // two lowercase hex digits per byte
    .join('');
}

// Example: SHA-256 hash as hex
async function sha256Hex(message) {
  const encoded = new TextEncoder().encode(message);
  const hash = await crypto.subtle.digest('SHA-256', encoded);
  return bufferToHex(hash);
}

This pattern appears in virtually every web application that deals with cryptographic operations, file checksums, or binary protocol implementations. The padStart(2, '0') is crucial — without it, bytes like 0x0A would be rendered as just a instead of 0a, corrupting the output. It's a subtle bug that I've seen in production code more times than I can count.
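For completeness, the inverse direction, a hex string back to bytes (hexToBuffer is an illustrative name; the two-characters-per-byte loop is the standard approach):

```javascript
// Parse each pair of hex characters back into one byte.
function hexToBuffer(hex) {
  const bytes = new Uint8Array(hex.length / 2);
  for (let i = 0; i < bytes.length; i++) {
    bytes[i] = parseInt(hex.slice(i * 2, i * 2 + 2), 16);
  }
  return bytes.buffer;
}

console.log(Array.from(new Uint8Array(hexToBuffer('0a1b')))); // [ 10, 27 ]
```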

With the fundamentals covered in this guide and the converter tool above for instant verification, you should be well-equipped to work confidently with hexadecimal in any context. Whether you're debugging a memory issue, crafting a color palette, analyzing network traffic, or just trying to figure out what 0xCAFEBABE means in a Java class file, understanding hex is a skill that pays dividends throughout your career. Bookmark this page — between the converter, the bit visualization, and the batch processing features, it covers the vast majority of hex conversion needs you'll encounter in day-to-day development.

Number Base Usage in Programming Languages (Survey Data)

Understanding Hexadecimal

This video explains the hexadecimal number system with practical examples and visual demonstrations.

Frequently Asked Questions

What is hexadecimal and why is it used in computing?

Hexadecimal (base-16) uses digits 0-9 and letters A-F to represent values. It's used because each hex digit maps to exactly 4 binary bits (a nibble), making it a compact and human-readable representation of binary data. One byte = two hex digits. This clean mapping makes hex ideal for memory addresses, color codes, and binary file inspection.

How do I convert hexadecimal to decimal manually?

Multiply each hex digit by its positional power of 16 and sum the results. For example, hex 1A3 = (1 x 256) + (10 x 16) + (3 x 1) = 256 + 160 + 3 = 419 decimal. Remember: A=10, B=11, C=12, D=13, E=14, F=15.

What is the difference between hex, decimal, binary, and octal?

They're different number bases: binary (base-2, digits 0-1), octal (base-8, digits 0-7), decimal (base-10, digits 0-9), hexadecimal (base-16, digits 0-F). The number 255 equals FF in hex, 11111111 in binary, and 377 in octal. They all represent the same underlying value.

How do I convert text to hexadecimal?

Convert each character to its numeric code point, then to hex. ASCII 'A' = 65 decimal = 41 hex. For multi-byte UTF-8 characters, encode the full byte sequence. This tool handles ASCII, UTF-8, and UTF-16 encodings automatically.

Why do hex color codes use 6 digits?

Hex colors encode three 8-bit channels (Red, Green, Blue) as 2 hex digits each, totaling 6. #FF5733 means Red=FF(255), Green=57(87), Blue=33(51). Modern CSS also supports 8-digit hex with alpha transparency, and 3-digit shorthand where each digit is doubled.

What are common hex values every developer should know?

0xFF (255, max byte), 0x100 (256), 0xFFFF (65535, max 16-bit), 0x7FFFFFFF (max signed 32-bit), 0xDEADBEEF (debug marker), 0x0A (newline), 0x0D (carriage return), 0x20 (space), 0x00 (null). Recognizing these on sight speeds up debugging significantly.

Can this tool handle large hexadecimal numbers?

Yes. For values within JavaScript's safe integer range (up to 2^53-1), it uses standard Number arithmetic. For larger values, it automatically switches to BigInt, supporting arbitrarily large hex numbers with full precision. You can convert 256-bit hashes, 128-bit UUIDs, and beyond.

External Resources

Browser Compatibility

This tool uses standard JavaScript APIs including BigInt. Last tested March 2026.

Feature | Chrome 131+ | Firefox 128+ | Safari 17+ | Edge 131+
BigInt Arithmetic | Full | Full | Full | Full
Clipboard API (copy) | Full | Full | Full | Full
TextEncoder/Decoder | Full | Full | Full | Full
CSS Custom Properties | Full | Full | Full | Full
ES2020+ Syntax | Full | Full | Full | Full
localStorage | Full | Full | Full | Full

Tested on Chrome 130, Chrome 131, Firefox, Safari, and Edge. Meets Google PageSpeed performance benchmarks. No polyfills required for any supported browser.

About This Tool

This hex converter was built by Michael Lip as a fast, reliable, and privacy-first tool for developers, students, and security researchers. It runs 100% client-side in your browser -- no data is ever sent to any server. All conversions happen instantly using JavaScript's native BigInt for precision.

The tool supports bidirectional conversion between hexadecimal, decimal, binary, octal, and ASCII text, with additional features like bit visualization, hex color preview, and batch processing. It was designed to eliminate the need for ad-heavy, login-gated converter websites.

Michael Lip maintains this tool and tests it regularly across Chrome, Firefox, Safari, and Edge to ensure accuracy and compatibility. If you find this tool useful, consider sharing it with fellow developers.

Quick Facts