>> Practically, you don't know if local FLOSS software is the same as its released code
> There's strong efforts for reproducible builds and bootstrappable distros in the past years. NixOS and guix have pioneered the field, but i believe both Debian and ArchLinux are now over 90% reproducible.
That's a valuable step, but few people build their OS. In practice, few people will know if their copy matches the released code.
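For the narrower check (does my downloaded copy match the released binary?), published checksums already help. A minimal sketch with hypothetical filenames:

```shell
# Hypothetical filenames: verify a downloaded image against a published checksum list.
# Note: this only proves the copy matches the release, not that the release matches the source.
echo "release contents" > distro.iso
sha256sum distro.iso > SHA256SUMS   # stands in for the checksum file the project publishes
sha256sum -c SHA256SUMS             # prints "distro.iso: OK" when the hashes match
```

Of course, this still requires fetching the checksum file over a channel the attacker doesn't control, which is the hard part.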
>> seems difficult for hosted cloud software
> Yes, it's outright impossible
I think it may be possible, but it will require different tools. Remote attestation? Can we verify what is running in memory?
> That's a valuable step, but few people build their OS. In practice, few people will know if their copy matches the released code.
This is true! That's why the guix project has been working on `guix challenge`, which lets you query various substitute servers for the checksums of a specific package and check that they all match. I personally think such an approach is a huge step toward making software somewhat verifiable by non-technical people (although guix itself is not exactly approachable by that audience; i'm talking about the principle).
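The principle behind `guix challenge` can be sketched in plain shell: obtain the same build artifact from independent sources and compare content hashes (the filenames here are hypothetical stand-ins, not real guix paths):

```shell
# Hypothetical stand-ins for the same package as built/served by two independent parties.
echo "package bits" > from-server-a.nar
echo "package bits" > from-server-b.nar

hash_a=$(sha256sum from-server-a.nar | cut -d' ' -f1)
hash_b=$(sha256sum from-server-b.nar | cut -d' ' -f1)

# If independent builders agree on the hash, a single compromised server
# cannot silently substitute a backdoored binary without being noticed.
if [ "$hash_a" = "$hash_b" ]; then
  echo "contents match"
else
  echo "DISCREPANCY: non-reproducible build or tampering"
fi
```

This only works when builds are reproducible in the first place, which is why the two efforts go hand in hand.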
> Remote attestation? Can we verify what is running in memory?
Yes and no. You can, if you trust the hardware/software doing the remote attestation. Signal project for instance has argued that a centralized server with Intel SGX "secure enclaves" is the safest we can do. I personally strongly disagree, and i believe vulnerabilities found since then in Apple/Intel/AMD security chips go against that argument.
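To make the trust question concrete, here is a toy challenge-response attestation flow in Python. All names are illustrative; real schemes like SGX have the hardware sign a measurement of the running code with a vendor-provisioned key, and that key is exactly the thing you are forced to trust:

```python
import hashlib
import hmac
import os

# Toy model: the "enclave" holds a key provisioned by the hardware vendor and
# reports a measurement (hash) of the code it is running. Trusting the report
# means trusting that vendor key and the silicon guarding it.
VENDOR_KEY = b"secret-baked-into-the-chip"  # illustrative only

def prover_attest(nonce: bytes, running_code: bytes):
    """What the remote machine returns: its code measurement, authenticated
    with the vendor key so it cannot (in theory) be forged in software."""
    measurement = hashlib.sha256(running_code).digest()
    tag = hmac.new(VENDOR_KEY, nonce + measurement, hashlib.sha256).digest()
    return measurement, tag

def verifier_check(nonce, measurement, tag, expected_code: bytes) -> bool:
    """Client side: recompute the expected measurement and verify the tag.
    Every step assumes VENDOR_KEY was never extracted or misused."""
    expected = hashlib.sha256(expected_code).digest()
    good_tag = hmac.new(VENDOR_KEY, nonce + measurement, hashlib.sha256).digest()
    return hmac.compare_digest(tag, good_tag) and measurement == expected

nonce = os.urandom(16)
code = b"the published server source, built reproducibly"
measurement, tag = prover_attest(nonce, code)
print(verifier_check(nonce, measurement, tag, code))           # True
print(verifier_check(nonce, measurement, tag, b"backdoored"))  # False
```

The sketch makes the disagreement visible: the whole scheme is only as strong as the vendor key staying secret inside the chip, which is what the enclave vulnerabilities mentioned above undermine.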
Software security is largely tackled by bootstrappability, reproducibility, and "challenges". Hardware/firmware security is an entirely different problem...
> Signal project for instance has argued that a centralized server with Intel SGX "secure enclaves" is the safest we can do. I personally strongly disagree, and i believe vulnerabilities found since then in Apple/Intel/AMD security chips go against that argument.
If that isn't 'safest', what do you think is safer (and practical)?
> If that isn't 'safest', what do you think is safer (and practical)?
Distributing trust is both safer and practical. A single, centralized server will always be vulnerable, no matter what defense in depth you deploy. Standardization and decentralization are more valuable in the long run for privacy/security than any customized effort.
That's what has allowed us, over the past years, to do PGP over email over Tor onions, transparently via onionMX SRV records (plus a local cache/mapping to protect against a lying DNS). Meanwhile, Signal still requires a unique identifier (a phone number) to operate and mandates the use of AWS and other privacy-hostile providers to reach its server, and there's nothing we can do about it because they control the entire infrastructure. Some resources on that:
- https://gultsch.de/objection.html <-- A free-software Jabber/XMPP client developer's answer to Signal team's stance against federation and open standards