Minimizing trust assumptions in Messaging Protocols

As I was listening to an interview yesterday, the journalist claimed that his Signal communications were being spied on by the NSA. Whether to believe him is subjective. Still, it is an objective truth that nation-states can break into your devices by installing malware or by cooperating with the operating system vendor. My mental exercise of the day is to understand how to architect a system that minimizes today's trust assumptions when using E2E-encrypted messaging applications such as Signal. I draw much of the inspiration from this question.

I believe that these are the trust assumptions (please correct me if I'm wrong or missing something):

  1. Trust that the encryption algorithm is not broken.
  2. Trust in Signal's software integrity. You trust that the Signal app you run is built from the open-source code they published (a reproducible-build check is sketched after this list).
  3. Trust in the underlying operating system's integrity. If the operating system is closed source, as is the case for iOS and for vendor builds of Android, you need to trust that there is no backdoor in its security system, for example a backdoor that allows Apple to access all the data on your phone. If the operating system is open source, you can compile and run it yourself and, by reading the code, analyze whether there is any backdoor. So this trust assumption is removed if the OS is open source.
  4. Trust in the underlying operating system's soundness. The user needs to trust that the OS is not vulnerable to attacks from third parties. Again, if the code is closed source, the required trust is at its maximum; if the code is open source, there is at least some degree of auditability from the user's perspective. Either way, it is unrealistic to guarantee that an operating system is bug-free and invulnerable to every type of attack.
  5. Trust in the hardware on which the code is running. Examples of hardware risks mentioned here include an AES chip that implements the algorithm incorrectly, or a phone's hardware RNG rigged in such a way that it becomes much easier for someone to break your encryption. This also includes attacks by the chipmaker, as described here, who can act in bad faith: installing an embedded hardware debugger that listens on internal buses, hosting a server and using the NIC for internet access, or planting a backdoor in the primary bootloader, which boots the SoC and acts as the root of trust. From there, the secure boot flow can be compromised all the way down to the OS.
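
To illustrate assumption 2, here is a minimal sketch in Python of the reproducible-build check mentioned above. All file names are hypothetical, and the comparison is deliberately simplified: Signal's actual verification compares the archives entry by entry while excluding the vendor's signature block, since the signature alone makes a whole-file hash differ.

```python
import hashlib

# Hypothetical reproducible-build check: hash the APK shipped by the app
# store against one you compiled yourself from the published source code.
def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

store_apk = sha256_of("Signal-from-store.apk")  # hypothetical file name
own_build = sha256_of("Signal-own-build.apk")   # hypothetical file name
print("match" if store_apk == own_build else "MISMATCH: do not trust")
```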

The solution that I came up with is to use an isolated device to 1) write the message and 2) perform the encryption/decryption. In this way we are merely moving the trust assumptions from the phone to the isolated device, I know, but bear with me. The system goes as follows:

  1. On the isolated device, Alice generates her private key. Bob does the same on his side.
  2. Using their phones, Alice and Bob exchange their public keys over an untrusted channel such as a Signal chat.
  3. On the isolated device, Alice derives the shared secret between her and Bob, types her message, and encrypts it using the shared secret.
  4. Using her phone, Alice sends the encrypted message over the untrusted channel.
  5. On the isolated device, Bob derives the same shared secret and uses it to decrypt the message.
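
For concreteness, below is a minimal sketch of steps 1-5 in Python with the pyca/cryptography library, assuming X25519 for the key agreement and AES-GCM for the message encryption; the scheme above does not prescribe particular primitives, so these choices are mine. Both parties run in one script here, whereas in the real setup each half lives on a separate isolated device.

```python
import os

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def raw_public(private: X25519PrivateKey) -> bytes:
    # Serialize the public half for transport over the untrusted channel.
    return private.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )


def derive_key(private: X25519PrivateKey, peer_public: bytes) -> bytes:
    # Steps 3 and 5: ECDH shared secret, stretched into a 256-bit AES key.
    shared = private.exchange(X25519PublicKey.from_public_bytes(peer_public))
    return HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None,
        info=b"isolated-device-chat",  # arbitrary context label, my choice
    ).derive(shared)


# Step 1: each party generates a key pair on their own isolated device.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Step 2: only the public keys cross the untrusted channel (the Signal chat).
alice_public, bob_public = raw_public(alice_private), raw_public(bob_private)

# Step 3: Alice derives the shared key and encrypts on her isolated device.
nonce = os.urandom(12)  # AES-GCM requires a unique nonce per message
ciphertext = AESGCM(derive_key(alice_private, bob_public)).encrypt(
    nonce, b"meet at noon", None
)

# Step 4: nonce and ciphertext travel over the untrusted channel.

# Step 5: Bob derives the same key on his device and decrypts.
plaintext = AESGCM(derive_key(bob_private, alice_public)).decrypt(
    nonce, ciphertext, None
)
assert plaintext == b"meet at noon"
```

Note that, like any unauthenticated Diffie-Hellman exchange, this sketch does nothing to bind the public keys to Alice and Bob; verifying them out of band is what keeps the untrusted channel from mounting a man-in-the-middle attack.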

We assume that:

  • The isolated device is, for example, a BeagleBone board with an open-source hardware design running an open-source Linux operating system.
  • The isolated device never connects to any network, so that to go from step 3 to step 4 Alice has to manually type the encrypted message generated by the isolated device into her phone (a typing-friendly encoding is sketched below).
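
Since step 4 relies on manual transcription, the ciphertext should be rendered in a typing-friendly alphabet. A small sketch, assuming the 12-byte AES-GCM nonce from the earlier example is prepended to the ciphertext (both helper names are hypothetical):

```python
import base64

# Base32 uses a single-case alphabet that omits easily confused characters
# such as 0 and 1, making hand-typing a ciphertext less error-prone.
def to_transcribable(nonce: bytes, ciphertext: bytes) -> str:
    return base64.b32encode(nonce + ciphertext).decode("ascii")

def from_transcribable(text: str) -> tuple[bytes, bytes]:
    raw = base64.b32decode(text)
    return raw[:12], raw[12:]  # assumes the 12-byte nonce prefix from above
```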

My question is: in the second scenario that I describe, how would the five trust assumptions listed in scenario one change? Can some of them be totally removed, or at least minimized?