Article URL: https://maurycyz.com/misc/c_files/
Comments URL: https://news.ycombinator.com/item?id=47209788
Why does C have the best file API?
2026-02-28 (Programming) (Rants)
Ok, the title is a bit tongue-in-cheek, but there's very little thought put into files in most languages.
What you get is usually the same as, or often worse than, C:
just read(), write() and some kind of serialization library.
What you don't usually get is accessing files exactly the same as data in memory:
#include <sys/mman.h>
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
int main(void) {
    // Create/open a file containing 1000 unsigned integers,
    // initialized to all zeros.
    size_t len = 1000 * sizeof(uint32_t);
    int file = open("numbers.u32", O_RDWR | O_CREAT, 0600);
    ftruncate(file, len);
    // Map it into memory.
    uint32_t* numbers = mmap(NULL, len,
                             PROT_READ | PROT_WRITE, MAP_SHARED,
                             file, 0);
    if (numbers == MAP_FAILED)
        return 1;
    // Do something:
    printf("%u\n", numbers[42]);
    numbers[42] = numbers[42] + 1;
    // Clean up
    munmap(numbers, len);
    close(file);
    return 0;
}
Memory mapping isn't the same as loading a file into memory:
- It still works if the file doesn't fit in RAM.
- Data is loaded on demand, so it won't take all day to open a terabyte file.
- It works with all datatypes and is automatically cached.
- This cache is cleared automatically if the system needs memory for something else.
In most other languages, however,
you have to read() in small chunks, parse, process, serialize, and finally write() everything back to disk.
This works, but it's verbose and needlessly limited to sequential access:
computers haven't used tape for decades.
If you're lucky enough to have memory mapping, it's usually limited to byte arrays,
which still require explicit parsing and serialization.
It ends up being just a nicer way to call read() and write().
Considering that most languages already support custom allocators, getter functions, and the like, adding a better way to access files seems very doable...
but (as far as I'm aware) C is the only language that lets you specify a binary format and just use it.
C's implementation isn't even very good:
memory mapping comes with some overhead (page faults, TLB flushes), and C does nothing to handle endianness or errors.
But it doesn't take much to beat nothing.
Sure, you might want to do some parsing and validation, but this shouldn't be required every time data leaves the disk.
It's very common to run out of memory, which makes it impossible to just parse everything into RAM.
Being able to offload data without complicating the code is very useful.
Just look at Python's pickle:
it's a completely insecure serialization format.
Loading a file can cause code execution even if you just wanted some numbers...
but it's still very widely used because it fits Python's mix-code-and-data model.
After all, a lot of files aren't untrusted input.
File manipulation is similarly neglected.
The filesystem is the original NoSQL database, but you seldom get more than a wrapper around C's readdir().
This usually results in people running another database, such as SQLite, on top of the filesystem,
but relational databases never quite fit your program.
... and SQL integrates even worse than files:
On top of having to serialize all your data, you have to write code in a whole separate language just to access it!
Most programmers will use it as a key-value store and implement their own indexing on top:
a bizarre, triple-nested database.
So, to answer the title:
I think it's the result of a bad assumption.
That data being read from a file is coming from somewhere else and needs to be parsed...
and that data being written to disk is being sent somewhere and needs to be serialized into a standard format.
This simply isn't true on memory-constrained systems,
and with 100 GB files,
every system is memory constrained.