The chat continues on all linked devices from the point in time that they are linked.
Imagine two people having a face-to-face conversation, then a third person walks up and joins in. The third person doesn’t know what was said before they joined the conversation, but all three continue the conversation from that point on.
Linked devices are like the above example, as if two of those people were married and told each other about every conversation they’d had since their wedding.
There is no reason why the message sync that works from phone to phone could not be implemented on the desktop client as well.
Does it work phone to phone? I was under the impression that a backup restore was needed if you wanted previous messages. It’s really an unnecessary security risk to have previous message sync. Someone gets your phone in their hand for 20 seconds, links your device and they get every message you have ever sent? No bueno.
You can sync messages from phone to phone. https://support.signal.org/hc/en-us/articles/360007059752-Backup-and-Restore-Messages#android_transfer
I haven’t actually synced a new phone to Signal, does everything just carry over? I assumed you needed to transfer your account from phone to phone, not just link a new device.
A new client doesn’t get old messages. The phone app only offers the possibility of transferring a backup, which desktop doesn’t have.
There is no sharing of messages between linked devices - that would break forward secrecy, which prevents a successful attacker from getting historical messages. See the first bullet of: https://support.signal.org/hc/en-us/articles/360007320551-Linked-Devices
Messages are encrypted per device, not per user (https://signal.org/docs/specifications/sesame/), and forward secrecy is preserved (https://en.m.wikipedia.org/wiki/Forward_secrecy, for the concept in general, and https://signal.org/docs/specifications/doubleratchet/ for Signal’s specific approach).
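To make “encrypted per device, not per user” concrete, here’s a minimal sketch of the fan-out idea: one logical recipient, several devices, and an independent encryption pass per device. The `DeviceSession` class and its names are hypothetical stand-ins, not Signal’s actual Sesame implementation or API.

```python
# Sketch: per-device pairwise sessions, each with its own key.
# Illustrative only; NOT Signal's real Sesame implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class DeviceSession:
    """One pairwise session per (sender, recipient-device). Hypothetical."""
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.key = AESGCM.generate_key(bit_length=256)  # independent session secret

    def encrypt(self, plaintext: bytes) -> tuple[bytes, bytes]:
        nonce = os.urandom(12)
        return nonce, AESGCM(self.key).encrypt(nonce, plaintext, None)

# A "user" is really a set of device sessions; each gets its own ciphertext.
sessions = [DeviceSession("phone"), DeviceSession("desktop")]
message = b"hello from my phone"

envelopes = {s.device_id: s.encrypt(message) for s in sessions}
# Two ciphertexts under two unrelated keys: compromising the desktop
# session reveals nothing about the phone session's traffic.
for device_id, (nonce, ct) in envelopes.items():
    print(device_id, ct.hex()[:32], "...")
```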
Message logs don’t break forward secrecy in a cryptographic sense; retaining the original asymmetric decryption keys (or a method to recreate them) does. Making history editable would help against that too.
What Signal actually intends is to limit privacy leaks: it only allows history transfer when you transfer the entire account to another device and “deactivate” the account on the first one, so you can’t silently get access to all of somebody’s history.
You’re describing something very different - you already have the messages, and you already have them decrypted. You can transfer them without the keys. If someone gets your device, they have them, too.
Whether Signal keeps the encrypted messages or not, a new device has no way of getting the old messages from the server.
I run a cryptography forum; I know the exact definition of these terms. Plaintext message logs are very distinct from forward secrecy. What forward secrecy means in particular is that captured network traffic can’t be decrypted later, even if you can steal the user’s keys at a later point (because the session used session keys that were later deleted). Retrieving local logs with no means of verifying authenticity is nothing more than a classical security breach.
You can transfer messages as part of an account transfer on Signal (at least on Android). This deactivates the app on the old device (so you can’t do it silently to somebody’s device).
I would argue that it is not limited to network traffic; it is the general concept that historical information is not compromised, even if current (including long-term) secrets are compromised.
From my comment earlier:

There is no sharing of messages between linked devices - that would break forward secrecy
This describes devices linked to an account, where each is retrieving messages from the server - not a point-to-point transfer, which is how data is transferred from one Android device to another. If a new device could retrieve and decrypt old messages on the server, that would be a breach of the forward security concept.
Once again reminding you that I run a cryptography forum (I’ve done so for over 10 years, and I keep up to date on the field), and it’s a term defined by professional cryptographers.
https://www.sectigo.com/resource-library/perfect-forward-secrecy
https://link.springer.com/referenceworkentry/10.1007/978-1-4419-5906-5_90
https://www.sciencedirect.com/topics/computer-science/forward-secrecy
Literally all definitions speak of network traffic and leaked / extracted encryption keys. PFS is about using short-term keys that you delete so that they cannot leak later.
Backup and sync via a separate mechanism is not a PFS violation, in particular because it is independent of that same encrypted session. It’s entirely a data retention security issue.
Matrix.org supports message log backup via the server, and does so by uploading encrypted message logs and syncing the keys between clients. You can delete the logs later, or delete your keys, or even push fake logs if you want. It’s still happening outside of the original encrypted session and the adversary can’t confirm what actually was said in the original session.
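As an illustration of why such a backup sits outside the original session, here’s a sketch under the assumption of a backup key generated independently of any session key. This mirrors the general shape of the idea, not Matrix’s actual key-backup wire format:

```python
# Sketch: message history encrypted under a backup key that is
# independent of the (already deleted) session keys. Illustrative
# only; not Matrix's actual key-backup format.
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

backup_key = AESGCM.generate_key(bit_length=256)  # held client-side, never by the server

def encrypt_log(log: list[dict]) -> tuple[bytes, bytes]:
    nonce = os.urandom(12)
    blob = json.dumps(log).encode()
    return nonce, AESGCM(backup_key).encrypt(nonce, blob, None)

nonce, blob = encrypt_log([{"from": "alice", "text": "hi"}])
# The server stores only (nonce, blob). You can delete backup_key, delete
# the blob, or upload a forged log, precisely because nothing here is
# bound to the original session's keys or transcripts - the adversary
# can't confirm what was actually said in the original session.
```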
I don’t know why you think that PFS is broken if a local client has to be breached to recover encrypted data from a cloud backup, but not broken if a local client has to be breached to recover the same data from the client itself. Literally the only difference is where the data is stored, so either chat logs available to the client break PFS or they do not.
You are conflating the concept and the implementation. PFS is a feature of network protocols, and those are a frequently cited example, but networks are not part of the definition. From your second link, the definition is:

Perfect forward secrecy (PFS for short) refers to the property of key-exchange protocols (Key Exchange) by which the exposure of long-term keying material, used in the protocol to authenticate and negotiate session keys, does not compromise the secrecy of session keys established before the exposure.
And your third link:

Forward secrecy (FS): a key management scheme ensures forward secrecy if an adversary that corrupts (by a node compromise) a set of keys at some generations j and prior to generation i, where 1 ≤ j < i, is not able to use these keys to compute a usable key at a generation k where k ≥ i.
Neither of these mention networks, only protocols/schemes, which are concepts. Cryptography exists outside networks, and outside computer science (even if that is where it finds the most use).
Funnily enough, these two definitions (which I’ll remind you, come from the links you provided) are directly contradictory. The first describes protecting information “before the exposure” (i.e. past messages), while the second says a compromise at j cannot be used to compromise k, where k is strictly greater than j (i.e. a future message). So much for the hard and fast definition from “professional cryptographers.”

Now, what you’ve described with Matrix sounds like it is having a client send old messages to the server, which are then sent to another client. The fact that the content is old is irrelevant - the content is sent in new messages, using new sessions, with new keys. This is different from what I described, about a new client downloading old messages (encrypted with the original key) from the server. In any case, both of these scenarios create an attack vector through which an adversary can get all of your old messages, which, whether you believe it violates PFS by your chosen definition or not, does defeat its purpose (perhaps you prefer this phrasing to “break” or “breach”).
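The two directions become concrete with a toy hash ratchet (my example, not from either link): each key is a one-way function of the previous one, so a key stolen at generation j yields every generation after j but none before it. That satisfies the first definition (the past stays safe) while visibly failing the second (the future all falls), which is exactly the tension above. Real protocols layer DH exchanges on top to get both.

```python
# Toy hash ratchet to make the j/k directions concrete. Illustrative only.
import hashlib

def next_key(key: bytes) -> bytes:
    # One-way step: easy to compute forward, infeasible to invert.
    return hashlib.sha256(key).digest()

keys = [b"\x00" * 32]                 # generation 0 root (toy value)
for _ in range(5):
    keys.append(next_key(keys[-1]))   # generations 1..5

stolen = keys[3]  # adversary corrupts generation j = 3
# Forward: every generation k >= 3 falls out of the one-way chain...
assert next_key(stolen) == keys[4]
# ...but generations before 3 cannot be recovered without inverting
# SHA-256: "exposure does not compromise keys established before".
```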
This seems to align with what you said in your first response, that Signal’s goal is to “limit privacy leaks,” which I agree with. I’m not sure why we’ve gotten so hung up on semantics.
I wasn’t going to address this, but since you brought it up twice, running a forum is not much of a credential. Anyone can start a forum. There are forums for vaxxers and forums for antivaxxers, forums for atheists and forums for believers, forums for vegans and forums for carnivores. Not everyone running these forums is an expert, and necessarily, not all of them are “right.” This isn’t to say you don’t have any knowledge of the subject matter, only that running a forum isn’t proof you do.
If you’d like to reply, you may have the last word.
Neither of these mention networks, only protocols/schemes, which are concepts.

This is ridiculous rules lawyering, and it isn’t even done well. Such schemes inherently assume multiple communicating parties. Sure, you might not need a network, but you still have to have distinct devices and a communication link of some sort (because if you have a direct trusted channel, you don’t need cryptography).
You’re also wrong about your interpretation.
Here’s how to read it:
At point A, both parties create their long-term identity keys.
At point B, they initiate a connection and create session encryption keys with a key exchange algorithm (first half of PFS).
At point C, they exchange information over the encrypted channel.
At point D, the session keys are automatically deleted (second half of PFS).
At point E, the long-term key of one party is leaked. The contents from B and C cannot be recovered, because the session key is independent of the long-term key and is now deleted. This is forward secrecy. The adversary can’t compromise it after the fact without breaking the whole algorithm; they have to attack the clients while the session is ongoing.
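Here is a minimal sketch of points A through E using X25519 and Python’s `cryptography` package. The deletion step is simulated with `del`; real implementations must zeroize key memory, which Python can’t guarantee, and would sign the ephemeral keys with the identity keys rather than just generating both side by side.

```python
# Points A-E from above as a runnable sketch. Deletion is simulated;
# real code must zeroize memory and authenticate the handshake.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# A: long-term identity keys (used to authenticate, never to encrypt traffic).
alice_id, bob_id = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()

# B: fresh ephemeral keys per session (in a real protocol, signed by
# the identity keys), combined into a session key via key exchange.
alice_eph, bob_eph = X25519PrivateKey.generate(), X25519PrivateKey.generate()
shared = alice_eph.exchange(bob_eph.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"session").derive(shared)

# C: traffic is encrypted under the session key only.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"secret plan", None)

# D: session secrets deleted when the session ends.
del alice_eph, bob_eph, shared, session_key

# E: leaking alice_id / bob_id now reveals nothing about `ciphertext`;
# the only keys that could decrypt it no longer exist anywhere.
```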
This is motivated, for example, by how SSL 3.0 was usually used with a single fixed RSA keypair per server, letting clients generate and submit session encryption keys - allowing a total break of all communications with the server if that key is compromised. Long-term DH secrets were also often reused when they should have been single-use. Then we moved on to ECDH, where generating new session secrets is fast, and everybody adopted real PFS.
Yes, compromising the key often means you get stuff like the database too, etc. Not the point! If you keep deleting sensitive data locally when you should, then PFS guarantees it’s actually gone; the NSA can’t store the traffic in their big data warehouse and hope to steal the key later to decrypt what you thought you deleted. It’s actually gone.
And both of the definitions you quoted mean the same as the above.
In any case, both of these scenarios create an attack vector through which an adversary can get all of your old messages, which, whether you believe it violates PFS by your chosen definition or not, does defeat its purpose (perhaps you prefer this phrasing to “break” or “breach”).

Playing loose with definitions is how half of all broken cryptographic schemes ended up insecure. Being precise with attack definitions allows for better analysis and better defenses.
Like how better analysis of common attacks on long-running chats with PFS led to “self-healing” properties being developed to counter point-in-time leaks of session keys by repeatedly performing key exchanges, and to better protection of long-term keys, for example by making sure software like Signal makes use of the OS-provided, hardware-backed keystore. All of this is modeled carefully and described with precise terms.
Edit: given modern sandboxing techniques in phones, most malware and exploits don’t survive a reboot. If malware can compromise your phone at a specific time but can’t break the TPM, then once you reboot and your app rekeys, the adversary no longer has access, and this can be demonstrated with mathematical proofs. That’s self-healing PFS.
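A sketch of the self-healing shape: even if one chain key leaks at a point in time, a fresh DH exchange (whose private halves never leave the devices, e.g. a hardware-backed keystore) produces a new key the leak says nothing about. This is a toy model of the idea, not the actual Double Ratchet:

```python
# Toy model of "self healing": a point-in-time leak of one chain key
# is cured by mixing in a fresh DH exchange. Not the real Double
# Ratchet, just the shape of the argument.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def kdf(*inputs: bytes) -> bytes:
    # Fresh HKDF instance per derivation (HKDF objects are single-use).
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"ratchet").derive(b"".join(inputs))

chain_key = b"\x01" * 32      # current chain key (toy value)
leaked = chain_key            # adversary grabs it at this moment

# Rekey: both parties run a fresh DH; the private keys never leave
# the devices, so the earlier leak does not include them.
a, b = X25519PrivateKey.generate(), X25519PrivateKey.generate()
fresh = a.exchange(b.public_key())

chain_key = kdf(chain_key, fresh)   # new root mixes in fresh entropy
# `leaked` alone no longer determines chain_key: the adversary would
# also need `fresh`, which only ever existed inside the two devices.
```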
Anyone can start a forum.

Fair point, but my cryptography forum (reddit.com/r/crypto) has regulars that include people writing the TLS specifications and other well-known experts. They’re hanging around because the forum is high quality, and I’m able to keep quality high because I can tell who’s talking bullshit and who knows their stuff.