Security issue: Potential Man-in-the-Middle attack on end-to-end encryption

edited July 2018 in Lounge

Currently 1Password offers no end-to-end authentication between devices where 1Password is installed. The obvious risk is that AgileBits (or someone who has compromised AgileBits or its infrastructure) could gain access to all data contained in vaults by simply performing a good old man-in-the-middle attack that would go unnoticed by the user.

This has been discussed and confirmed here:

AgileBits said they would address this:

This security limitation means that 1Password is not fit for corporate environments where sensitive passwords/data are shared between devices.

Other competitors like RememBear (and I think also Bitwarden) have addressed this specific issue.
I'm starting this topic to discuss this issue openly.

Chime in if you think this is important!

1Password Version: all
Extension Version: Not Provided
OS Version: Not Provided
Sync Type: Not Provided


  • jpgoldberg (Agile Customer Care, Team Member)

    Hello again @raphaelrobert!

    Thank you for bringing this discussion here. Twitter is great, but it is easier to have more people talking with each other here.

    I am going to change the title of this thread from "no end-to-end encryption" to "potential MITM". I know that we disagree about the terminology here, but your data is end-to-end encrypted. It's just not provably so given the potential for a MitM.

  • Hey @jpgoldberg,

    I don’t disagree with your title at all, but my original title was about “end-to-end authentication”, not “end-to-end encryption”.
    I don’t doubt the encryption, I just point to the lack of authentication.

    Anyway, whatever gets the discussion going works for me!

  • jpgoldberg (Agile Customer Care, Team Member)
    edited July 2018

    The problem

    For those who don't see the issue, here is the start of a draft of a section on this (PDF) for the white paper.

    Why is this a problem?

Our goal has always been to design 1Password so that it is strongly resistant to insider attacks. We are not evil, and we have no plans to become evil. But the reason we build to defend against insider attacks is that if we can do that, then we automatically keep your data safe in case we are compromised. Sure, we like it that people trust us and 1Password, but a secure system should rest on verifiable trust.

    Our traditional answer

Our answer to the two people who have asked about this over the years has been:

    well, you can run 1Password in a debugger to see what public keys you are using.

    That isn't as terrible an answer as it might seem. We expect and hope that there are already people running 1Password in a debugger or reverse engineering the clients to ensure that it is behaving as it should. So a few people checking this way can at least make sure that we aren't systematically engaging in MitM attacks.

    The difficulty with this answer (beyond the obvious) is that the kind of MitM described could be very finely targeted instead of some misbehavior that would appear in all clients. So if Mr Talk (read the PDF linked to above) performed his attack on Molly and no one else, then only Patty and Molly would be capable of discovering the attack through running in a debugger and checking keys. Anyone else checking things out would never be able to detect that attack on Molly.

    Why we haven't built a "solution"

    (To be added later or in a follow up post)

    (Stalled) Progress since December 2017

    [To be edited later, as I don't have time now to finish this post]

It stalled for a number of reasons, chief among them that we haven't settled on how we are going to approach this. We have a couple of undocumented features in the command-line interface for 1Password to inspect public keys and their fingerprints. They are undocumented for a reason, as we may very well need to change them. We haven't even settled on a fingerprint scheme.

So this is very much subject to change, but currently, to get the fingerprint of any team member's public key:

    $ op get user [email protected][REDACTED] --fingerprint
    447c0d 577dfa e4084d 318d97 9a29c9 625dff 7baf8c ef7bf8

    Or to see the full public key

    $ op get user [email protected][REDACTED] --publickey | jq
    {
      "pubKey": {
        "alg": "RSA-OAEP-256",
        "ext": true,
        "e": "AQAB",
        "n": "rmWExBMsihygUDM3vI_GLMALfnb8g_39__X2-1bMbXPPyV7jj49h66jsEiaH4OyGWRCGMRuMFT2yiYf9kKYN6IKOuSLvP_2lk9sR2BbvQmaffxoGnnJ28AyobPFhMbERu_VI9LqmTshj_4IJj3J-O50_3fh5nfvZiPMmHd6Ic8jPxYaHjXNbqOIhVCEeNkbuPn1UJ9pw1Ca9Ce09-HzsHuRdx0drzUHMDckarEitkn60HJsbf1N1wLiEAspe_YM4-xFH6oCScZ1AAXYNo6Dtpuwn9EfZ4xRHT3BFDHpRItZMSalwfqh7GU7oou8xIDRWXY3YVCwF0JWzNaY1Nr-EBQ",
        "key_ops": […],
        "kty": "RSA",
        "kid": "5vl5b5rrmjgtc7b2xkaorjuvrm"
      },
      "fingerprint": "447c0d 577dfa e4084d 318d97 9a29c9 625dff 7baf8c ef7bf8"
    }
  • jpgoldberg (Agile Customer Care, Team Member)

    Just an update on some of our thinking about this. The fingerprint mechanism described above (and partially implemented in the CLI) only provides a fingerprint for the RSA-OAEP key, which is used for encryption. But when you create your 1Password account, you also generate an ECDSA key (which can be used for signing), and that should be covered in any robust fingerprint scheme as well.

    So chances are, we will be looking at updating our fingerprint computation to include the ECDSA key as well. How this will all look, even internally, is still undecided. And how this gets exposed to users is an even harder question. But I did want to let you know what we've been thinking about.
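    To make the shape of such a scheme concrete, here is a minimal sketch of a fingerprint that covers both keys. This is purely hypothetical: the hash choice (SHA-256), the truncation to 48 hex characters, and the JSON canonicalization are all assumptions for illustration, not 1Password's actual (undecided) design.

```python
import hashlib
import json


def account_fingerprint(enc_jwk: dict, sig_jwk: dict) -> str:
    """Hypothetical fingerprint covering both the RSA-OAEP encryption key
    and the ECDSA signing key, formatted like the CLI output above.
    The real scheme is undecided and may differ entirely."""
    # Canonicalize both public JWKs so identical keys always hash identically.
    canonical = json.dumps(
        {"enc": enc_jwk, "sig": sig_jwk}, sort_keys=True, separators=(",", ":")
    )
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    # Truncate to 48 hex characters and group by six, as in the example output.
    return " ".join(digest[i : i + 6] for i in range(0, 48, 6))


# Swapping either key (as a MitM would) changes the whole fingerprint.
rsa_jwk = {"kty": "RSA", "alg": "RSA-OAEP-256", "e": "AQAB", "n": "..."}
ecdsa_jwk = {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."}
print(account_fingerprint(rsa_jwk, ecdsa_jwk))
```

    The point of covering both keys in one fingerprint is that a MitM who substitutes either the encryption key or the signing key changes the fingerprint a user would verify.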

  • Feel free to move this post if you feel it's off topic.

    As I see it, potentially the larger security issue concerning an adversary with access to 1Password's infrastructure is that a large number of administrative tasks can only be done from the website.

    Consider the following scenario. An attacker has compromised 1Password's infrastructure. They download an (encrypted) copy of all the team's data, which they currently cannot read. The next time a recovery user logs in to perform some administrative function, such as recovering an account, the attacker captures the recovery user's passphrase and secret, allowing the attacker access to the recovery vault. Note that although a user's secret/passphrase is never submitted to the 1Password servers during the login process, there is little to stop an adversary from injecting JavaScript capable of stealing it into the 1Password website login page.

    If they do these two things, they now have full access to all of the team's data.

    Therefore, it might be worth creating a native, non-website client for performing administrative functions. Of course, an adversary can always attempt to push out a malicious update to native clients. However, there are several security benefits to a native client. Probably the most important is that it is much easier to secure the private keys used to sign software releases than it is to secure the private keys used to sign the website's javascript. The website private keys need to be on every web and caching server you and your third party cloud providers use.

  • Ben (AWS Team, Team Member)

    Hi @codasalt

    You make very valid points. We’ve touched on some of this briefly in the 1Password Security Design White Paper in the Appendix A: Beware of the Leopard section. We do wish to improve this situation. @jpgoldberg may be able to elaborate on some of our thoughts here.


  • I’m almost afraid to add to this thread: the impact to individuals, hijacked servers that generate a GUI, account creation, and so on. I know it’s a never-ending game of risk assessment with finite resources, hence, my mouth is shut... except for this!

    There's a typo in the Mr. Talk callout in the referenced PDF, second column: 1792 instead of 1729.

  • Regarding public keys specifically, what about a system design similar to the following?

    Sign each user's public key with the key of either a recovery user or the account-creation user at the time of account recovery or user creation. This creates a signature chain going from ordinary users to the user who created/recovered them, to whoever created/recovered that user, eventually all the way back to the original user. The clients could cache the original user's public key at the time of original account setup. This still has the possibility of active MitM, but it requires the adversary to have achieved compromise at the time of account creation/recovery for the specific user being targeted, which greatly decreases the attack surface. More importantly, it requires no additional actions on the part of normal users, like having to verify key fingerprints. More security-conscious organizations could distribute this key out of band to clients via a QR code, or using administrative access to their machines if they have it.
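    The chain-walk a client could perform under this design can be sketched as follows. Everything here is a stand-in for illustration: the `User` record, the directory lookup, and especially the toy HMAC-based "signature" (a real implementation would use asymmetric signatures, e.g. the ECDSA keys discussed earlier, where verification needs only the signer's public key).

```python
import hashlib
import hmac
from dataclasses import dataclass
from typing import Optional


@dataclass
class User:
    name: str
    pub_key: bytes             # stand-in for the user's real public key
    signed_by: Optional[str]   # who created/recovered this user (None for root)
    signature: bytes           # signature over pub_key by that user's key


def toy_sign(signer_key: bytes, data: bytes) -> bytes:
    # Toy stand-in for a real signature scheme. It only illustrates the
    # chain-walking logic; HMAC is symmetric and would NOT work in practice.
    return hmac.new(signer_key, data, hashlib.sha256).digest()


def chain_verifies(user: User, directory: dict, root_name: str,
                   cached_root_key: bytes) -> bool:
    """Walk signatures from `user` back to the locally cached root key."""
    current = user
    seen = set()
    while current.name != root_name:
        if current.signed_by is None or current.name in seen:
            return False  # broken or circular chain
        seen.add(current.name)
        signer = directory[current.signed_by]
        # Check that the signer really vouched for this user's key.
        expected = toy_sign(signer.pub_key, current.pub_key)
        if not hmac.compare_digest(expected, current.signature):
            return False
        current = signer
    # The chain must terminate at the key the client cached at setup time.
    return hmac.compare_digest(current.pub_key, cached_root_key)


# Build a chain: root signs admin's key, admin signs molly's key.
root = User("root", b"root-key", None, b"")
admin = User("admin", b"admin-key", "root", toy_sign(b"root-key", b"admin-key"))
molly = User("molly", b"molly-key", "admin", toy_sign(b"admin-key", b"molly-key"))
directory = {u.name: u for u in (root, admin, molly)}

print(chain_verifies(molly, directory, "root", b"root-key"))
```

    A MitM who substitutes a user's key without also forging a valid signature from that user's creator/recoverer breaks the chain, which is exactly the detection property the proposal aims for.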

    Handling key revocation might be a little tricky and might involve extra work on the part of administrator or recovery users, but I imagine this could be done in a reasonable way without requiring normal users to do anything extra.

  • jpgoldberg (Agile Customer Care, Team Member)

    @codasalt is absolutely correct that the most damaging form of this attack would be on the public keys used for the recovery group. And this difficult problem also interacts with revocation, which is not done cryptographically.

    There are several substantial difficulties in addressing all of this (which is why it hasn't all been addressed):

    1. Getting people to verify an individual's key will be hard enough. Getting some chain of trust for the recovery group keys is going to be much, much harder.
    2. Our original plans were for a cryptographic-based key revocation mechanism, but it is just harder to get there from here in practice than we originally thought. Given that we haven't succeeded in that effort over the past nearly three years, I don't want to have to rely on it for dealing with the MitM issue.

    So I think that just for practical reasons, we should be looking at relatively simple things we can do to reduce the opportunity for such attacks. That is, we should increase the likelihood that such an attack would be detected, even if we can't fully defend against it. Ultimately, we would want a full solution, but in practice, we have to take this incrementally.
