
multiple machines sync without single point of failure

I use several Linux machines and like to keep some config files and other important files in sync across them. This is a security risk: an intruder on one machine could easily modify a script that would then be propagated to the other machines automatically.

In the past, I've used two different methods for this:

  • Sync a folder to all of them (think Dropbox; I actually used Dropbox for a while before finding better options).
  • Use Git to version control those files.

Folder sync is by far the easier to use: you just modify a file, and it shows up modified on the other machines. Git is more formal but much more of a headache to maintain: you have to be conscious of what modifications you've made and pull your changes whenever you sit down at a different machine.
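To make that workflow concrete, here is roughly what I end up doing on each machine (a minimal Python sketch only; the `~/dotfiles` path and the `origin`/`main` remote and branch are just assumptions for the example):

```python
#!/usr/bin/env python3
"""Sketch of the manual Git workflow: check for local edits, then pull
before working on a machine. Paths and remote names are assumptions."""
import subprocess
from pathlib import Path

DOTFILES = Path.home() / "dotfiles"  # assumed location of the config repo

def git(*args: str) -> str:
    """Run a git command inside the dotfiles repo and return its output."""
    result = subprocess.run(
        ["git", "-C", str(DOTFILES), *args],
        check=True, capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Show what changed locally so nothing gets pulled over uncommitted edits.
    status = git("status", "--porcelain")
    if status:
        print("Uncommitted local changes:\n" + status)
    else:
        # Fast-forward only: refuse to silently merge diverged histories.
        print(git("pull", "--ff-only", "origin", "main"))
```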

At first glance, you could consider the Git solution safer than folder sharing (setting aside the encryption strength of either solution), because it eliminates automatic propagation from a possibly compromised machine.

That is, until you consider the question with a bit more depth:

  • The Git repository, wherever it is, continues to be a single point of failure. Whoever gains access to the Git repository can poison all your machines.
  • If you drive Git manually, you expose yourself to various problems, including ending up with different versions of the same files on different machines. If, on the other hand, you automate the Git process, it is no safer than the folder sync method.

This leads me to think that the right method, whatever it is, must use public-key signatures on updates, protecting against forged files. A single point of failure would still exist (the ability to sign with the proper key), but it would be harder to attack, especially if the key lives in a hardware token.
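As a rough illustration of the idea (only a sketch: it assumes the third-party Python `cryptography` package, uses a throwaway Ed25519 key pair, and hand-waves how the public key gets distributed to each machine):

```python
"""Sketch of the signed-update idea: sign files on a trusted machine,
verify the signature on every receiving machine before applying them.
The private key would ideally live on a hardware token, not on disk."""
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_file(private_key: Ed25519PrivateKey, path: Path) -> bytes:
    """Produce a detached signature for a synced file (done by the signer)."""
    return private_key.sign(path.read_bytes())

def verify_and_apply(public_key: Ed25519PublicKey, update: Path,
                     signature: bytes, target: Path) -> bool:
    """On a receiving machine: install the update only if the signature checks out."""
    try:
        public_key.verify(signature, update.read_bytes())
    except InvalidSignature:
        return False  # forged or tampered file: refuse to propagate it
    target.write_bytes(update.read_bytes())
    return True

if __name__ == "__main__":
    # Demo with a throwaway key pair; in practice the public key is
    # pre-distributed to every machine and the private key never leaves the signer.
    key = Ed25519PrivateKey.generate()
    pub = key.public_key()
    update = Path("bashrc.new")
    update.write_bytes(b"alias ll='ls -l'\n")
    sig = sign_file(key, update)
    print("accepted:", verify_and_apply(pub, update, sig, Path("bashrc.applied")))
```

The point, as I understand it, is that a compromised machine that can write files but cannot sign them would not get its forged versions accepted anywhere else.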

As I'm not a security expert, I know my train of thought is probably flawed. Please share your ideas on how to do this.