
How to handle CSVs with special characters

Accents, symbols, and emoji in your data don't have to break your workflow.

Jan 13, 2025 · 4 min read
Names with accents, currency symbols, and yes—even emoji—are increasingly common in data exports. Here's how to keep them intact.

Understand the problem

Characters beyond basic ASCII need more than one byte of storage. When software assumes a single-byte encoding, those multi-byte characters get corrupted or replaced with question marks.

  • ASCII covers only English letters and basic symbols
  • UTF-8 handles virtually all world characters
  • Encoding mismatches cause mojibake (garbled text)
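You can reproduce mojibake in a couple of lines. This sketch encodes text as UTF-8 and then decodes the bytes with Latin-1, a single-byte encoding, which is the classic mismatch behind garbled exports:

```python
# Mojibake in miniature: UTF-8 bytes read back with a
# single-byte encoding (Latin-1 here).
text = "café"
utf8_bytes = text.encode("utf-8")       # 'é' becomes two bytes: 0xC3 0xA9
garbled = utf8_bytes.decode("latin-1")  # each byte is read as its own character
print(garbled)  # cafÃ©
```

Every multi-byte character splits into two or more wrong single-byte characters, which is why "é" so often shows up as "Ã©" in broken files.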

Always use UTF-8

When exporting or saving CSVs, choose UTF-8 encoding. It's the modern standard and handles everything from é to 日本語 to 🎉.

  • UTF-8 is backwards compatible with ASCII
  • Most modern systems default to UTF-8
  • Legacy systems may need explicit conversion


Test with real data

Before processing a large file, test your workflow with a few rows containing special characters. If they come through intact, the rest of your data is likely safe too.

  • Include accented names in test data
  • Test currency symbols: £ € ¥
  • Verify after each transformation step
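A smoke test like this can be sketched as a write-then-read round trip: push a few deliberately tricky rows through and assert that nothing changed. The sample rows here are hypothetical; substitute values from your own data.

```python
import csv
import io

# A few rows that exercise accents, currency symbols, and emoji.
test_rows = [["Zoë Müller", "£9.99"], ["José", "€5"], ["reaction", "🎉"]]

# Round-trip through the csv module in memory.
buf = io.StringIO()
csv.writer(buf).writerows(test_rows)
buf.seek(0)

assert list(csv.reader(buf)) == test_rows, "special characters were mangled"
print("round trip OK")
```

Run the same check after each transformation step so you can pinpoint exactly where corruption creeps in.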

Fix corrupted characters

If the damage is already done, you may be able to recover by re-reading the file with the correct encoding. Tools such as iconv can convert between encodings.

  • Identify the original encoding first
  • Use iconv -f ORIGINAL -t UTF-8
  • Some corruption is unrecoverable
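One common case is recoverable without external tools: UTF-8 bytes that were mis-decoded as Latin-1 and re-saved. Undoing that exact mistake, encoding back to Latin-1, then decoding as UTF-8, restores the original text. A sketch, assuming that specific failure mode:

```python
# 'café' after its UTF-8 bytes were wrongly decoded as Latin-1.
garbled = "cafÃ©"

# Reverse the mistake: recover the original bytes, then decode correctly.
repaired = garbled.encode("latin-1").decode("utf-8")
print(repaired)  # café
```

This only works when the wrong decode was lossless; if bytes were replaced with question marks along the way, that information is gone for good.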

Key takeaway

UTF-8 is your friend. Use it everywhere and special characters stop being special.