
The 64-bit issue is certainly real, but very much overblown.

First of all, in systemd, which is a heavy D-Bus user, in practice we only send integers > 2^53 when we use UINT64_MAX as a special "niche" marker meaning "unlimited"/"unset"/"undefined". But JSON has a proper concept for this: the "null" value (or simply omitting the JSON field). Hence, in reasonably clean APIs this is mostly a non-issue.
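A minimal sketch of that mapping, in Python for brevity (the field name `MemoryMax` is just an illustrative systemd-style resource limit; the real serialization lives in sd-json, in C):

```python
import json

UINT64_MAX = 2**64 - 1  # in-band sentinel meaning "unlimited"/"unset" on the C side

def limit_to_json(limit: int):
    # Translate the in-band sentinel into JSON's out-of-band null instead of
    # emitting a huge integer that many parsers would round through a double.
    return None if limit == UINT64_MAX else limit

print(json.dumps({"MemoryMax": limit_to_json(UINT64_MAX)}))  # {"MemoryMax": null}
print(json.dumps({"MemoryMax": limit_to_json(4096)}))        # {"MemoryMax": 4096}
```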

That said, sd-varlink/sd-json (i.e. systemd's implementation of it) is of course clean with respect to signed and unsigned 64-bit integers when it processes JSON. Moreover, it automatically handles the case where you do what the various specs on the internet suggest for an integer > 2^53: encode it as a decimal string.
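A hedged sketch of that decimal-string convention (helper names are hypothetical, not the sd-json API):

```python
import json

def encode_u64(v: int):
    # Values above 2^53 can't round-trip through an IEEE-754 double,
    # so emit them as decimal strings, as various specs suggest.
    return v if v <= 2**53 else str(v)

def decode_u64(v):
    # Accept either the native integer form or the string fallback.
    return int(v) if isinstance(v, str) else v

big = 2**64 - 1
wire = json.dumps({"value": encode_u64(big)})
assert decode_u64(json.loads(wire)["value"]) == big

# Round-tripping through a double, by contrast, silently loses precision:
assert int(float(2**53 + 1)) != 2**53 + 1
```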

Would it be better if JSON had been more precise on this? Yes. Is it a big issue? No, not at all.



> if you do what the various specs on the internet suggest for an integer > 2^53: encode it as a decimal string.

Hexadecimal is better: more compact, and much less encoding/decoding effort. Alternatively, an array of two 32-bit integers works as well.
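Both alternatives are easy to demonstrate (a sketch; the `[high, low]` split order is just a convention picked for illustration):

```python
big = 0xFFFF_FFFF_FFFF_FFFF  # 2^64 - 1

# Hexadecimal string: 16 characters instead of 20 decimal digits,
# and trivial to parse without arbitrary-precision decimal conversion.
as_hex = format(big, "x")
assert len(as_hex) == 16 and len(str(big)) == 20
assert int(as_hex, 16) == big

# Two 32-bit halves as a JSON array, [high, low]; each half fits
# comfortably inside a double's 53-bit mantissa.
hi, lo = big >> 32, big & 0xFFFF_FFFF
assert (hi << 32) | lo == big
```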

Text forms for error values (not just NaN) are also useful.

But issues with non-UTF-8 encodings, as in the CJK world, may spoil the picture even more.


Are strings used for timestamps? Those are usually why I'm sending 64-bit ints over IPC.


If you want more than 1-second precision, 64 bits are not enough. (Hmm, do all C++ std::chrono implementations use 128-bit integers for nanosecond precision?)
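For scale, it is worth checking what a signed 64-bit nanosecond counter can actually represent. The C++ standard requires std::chrono::nanoseconds to use a representation of at least 64 bits, and common implementations use exactly 64, which gives roughly ±292 years around the epoch:

```python
# Span of a signed 64-bit counter at nanosecond resolution,
# using a 365-day year as a rough approximation.
NS_PER_YEAR = 365 * 24 * 3600 * 10**9
years = (2**63 - 1) / NS_PER_YEAR
print(round(years))  # about 292 years in each direction
```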



