Can you explain how security is maintained when the internet connection is compromised (e.g., MITM)?

wsjndr
Community Member

Can someone at Agilebits describe some of the safeguards that kick-in when I can't trust my internet connection? I live in China, and I've experienced a handful of instances when there's definitely some kind of MITM attack going on. There have been reported cases of pretty much everyone along the connection chain (Great Firewall, ISPs, "secure" wifi at hotels and companies) mucking around with traffic in ways that they shouldn't. Still, even under assault, it's sometimes necessary to be online.

So, for example, what happens with the process of updating the offline cache (it's not "sync", right?) when I can't trust the data that's being exchanged? Does that process get suspended? And will I still be able to use my slightly out-of-date offline cached vault to authenticate to sites and services? If authentication is happening over a compromised connection, is there any way to flag that activity so that I can update my credentials when I later have a secure connection? And are there any other cool 1Password safeguards that you'd like to brag about?


1Password Version: 6.1
Extension Version: Not Provided
OS Version: OS X 10.11.3
Sync Type: wi-fi

Comments

  • jpgoldberg
    1Password Alumni

    Excellent questions, @wsjndr!

    We designed 1Password for Families with the expectation that TLS can be subverted.

    Strictest forms of TLS

    But first let me talk about what we have done to prevent the kinds of attacks you described against TLS. We've stuck to the strongest forms of TLS: requiring TLSv1.2, using HSTS to thwart SSL stripping attacks, excluding weak ciphers to avoid downgrade attacks, and so on. Your Families domain insists that you talk to it using the highest TLS standards and will accept nothing less.

    One consequence of this is that you might find that you simply can't connect from some networks that are engaging in Man in the Middle (MitM) attacks on TLS, even if their intent is benign. Some organizations run MitM attacks to filter for malware or other such things. But from a technological point of view, we can't distinguish between malign and benign MitM.

    Additional layers

    But as you point out, we don't want to rely entirely on TLS, and so we have two other layers in the process.

    Data at rest encryption

    The most important layer is that your data is still encrypted with your keys. We never hold unencrypted 1Password data, and so we can't send you unencrypted data. Likewise, all your data is encrypted before you send it to us, and so you never send us unencrypted data.

    I should point out that all of this encryption uses Authenticated Encryption, meaning that any tampering with your data will be detected and the data rejected rather than silently accepted.
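    To make that tamper-detection point concrete, here is a toy encrypt-then-MAC construction in stdlib-only Python. It is emphatically not 1Password's actual construction (which uses vetted AEAD primitives); it only illustrates how authenticated encryption causes any modification of the ciphertext to be detected at decryption time.

```python
# Toy authenticated encryption (encrypt-then-MAC) for illustration only.
# A SHA-256 counter keystream stands in for a real cipher; HMAC-SHA256
# over nonce + ciphertext provides the authentication tag.
import hashlib, hmac, os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("data was tampered with")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

    Flipping even a single bit anywhere in the stored blob makes `open_sealed` raise an error instead of returning corrupted secrets.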

    Transport session key

    When your 1Password client logs in to our server it uses a protocol, SRP, that, among other desirable properties, creates an encryption key for that session. We use that key to add authenticated encryption to the payloads transmitted during the session.
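    For the curious, the session-key property of SRP can be sketched as follows. This is a heavily simplified SRP-6a walkthrough of my own: it uses a stand-in prime (the Curve25519 field prime) rather than the standardized RFC 5054 groups, computes both sides in one function for brevity, and omits the proof-of-key messages. It shows only the algebra by which client and server derive the same session key K without the password ever crossing the wire.

```python
# Simplified SRP-6a key agreement (illustration only, not a real deployment).
import hashlib, secrets

N = 2**255 - 19   # stand-in prime modulus; real SRP uses RFC 5054 groups
g = 2             # generator

def H(*parts: bytes) -> int:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return int.from_bytes(h.digest(), "big")

def i2b(n: int) -> bytes:
    return n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")

def derive_session_keys(password: bytes, salt: bytes):
    k = H(i2b(N), i2b(g))                 # multiplier parameter
    x = H(salt, password)                 # private key derived from password
    v = pow(g, x, N)                      # verifier: all the server ever stores

    a = secrets.randbelow(N); A = pow(g, a, N)                # client ephemeral
    b = secrets.randbelow(N); B = (k * v + pow(g, b, N)) % N  # server ephemeral
    u = H(i2b(A), i2b(B))                 # scrambling parameter

    S_client = pow((B - k * v) % N, a + u * x, N)   # client's shared secret
    S_server = pow(A * pow(v, u, N) % N, b, N)      # server's shared secret
    return H(i2b(S_client)), H(i2b(S_server))       # session key K on each side
```

    Both sides arrive at the same K, which can then be used for the extra layer of authenticated encryption inside the TLS tunnel; an eavesdropper who can read the TLS traffic still never sees the password or K.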

    So again, even if someone were able to read your TLS traffic, these other two layers keep things very safe.

    [image: Umbrella Bear]

    Off-line usage

    As you've seen, we've got cryptographic measures (authenticated encryption) to protect against data tampering. And the 1Password applications are happy to work even if the data that they have is out of date. Obviously we try to make sure that we can keep that data in a consistent state and to cope with situations in which it may be corrupted. I can't promise that 1Password will always work as expected when data is corrupted, but we have built it with all of the questions that you raise in mind.

  • wsjndr
    Community Member

    Okay, so assuming my own system isn't compromised (e.g., by a malicious browser extension or a zero-day browser vulnerability), my vault should be safe even if my internet connection isn't trusted.

    So, is it a fair assumption that any authentication to websites/services made over an untrusted internet connection might be compromised? For example, say I need to log in to https://login.blahblah.com and my browser warns me with a certificate error. I still need to log in, so I use 1Password to help me with the sign-in process. My vault is still safe, but I can probably assume that my password is now compromised.

    Therefore, my question/feature request is this: Would it be possible for 1Password to recognize that my connection is suspect and then to flag that login item in the Security Audit section? Or maybe there could be an option to manually set that flag when using 1Password to sign in. (Say there's an "Insecure login" button provided by 1Password.)

  • jpgoldberg
    1Password Alumni

    Yes, your 1Password vault will be safe as long as you are using a bona fide 1Password client on a computer that isn't itself compromised.

    Would it be possible for 1Password to recognize that my connection is suspect and then to flag that login item in the Security Audit section?

    We do this for one specific case: if you have a login saved for an HTTPS site and try to fill it into an HTTP site, 1Password will warn you. But in general, we won't be able to do as good a job as your browser can in warning you about TLS troubles, because our browser extension doesn't have access to all of the low-level connection details that your browser does.
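    That one specific check is easy to sketch (my own illustration, not 1Password's code): compare the scheme of the saved login's URL with the scheme of the page about to be filled.

```python
# Warn before filling a login that was saved for an HTTPS site
# into an HTTP page on the same host (a possible downgrade/stripping sign).
from urllib.parse import urlparse

def should_warn_before_fill(saved_url: str, current_url: str) -> bool:
    saved = urlparse(saved_url)
    current = urlparse(current_url)
    return (saved.scheme == "https"
            and current.scheme == "http"
            and saved.hostname == current.hostname)
```

    This only catches the outright HTTPS-to-HTTP case; as noted above, subtler certificate problems are visible to the browser but not to an extension.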

    A rant about why browser security warnings are ignored

    TLS and SSL are hard for websites to configure, and keeping the configuration correct is even harder. So most browser warnings about various sorts of security-related failures weren't due to malicious actors; they were just misconfigurations of the servers. Most people are not in a position to investigate the specific warning, test the network, seek other sources, and so on to determine what the actual problem is. And so, given that most of these were false positives (there was no actual malicious attack), people learned to just click through the warnings.

    So we need to be stricter about getting websites to be configured correctly and not train people into believing that browser warnings can be readily ignored. But it is hard.

    Here is one example of how things have changed over the years. It used to be considered okay to serve static images over HTTP while the more sensitive content was served over HTTPS; that is no longer considered acceptable. Similarly, serving login pages over HTTP with a form action that submitted to HTTPS was also a frequently advised optimization. (It used to be much more expensive to serve things via HTTPS than via HTTP, so lots of sites sought various ways to optimize.) Those examples show that things change beyond merely which protocol versions and cipher suites are acceptable.

    Now browsers may warn about those things ("mixed content", etc.) because they do pose dangers. But the older bad habits of some website developers (BTW, I am guilty of both of those habits myself, back in the 1990s) not only present a danger directly; they also reduce the effectiveness of the warnings.

    Over the past years, browser developers have started to look for ways to make the warnings both more helpful and more effective. Take a look at Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness for an excellent example of some of this research1. It is from a few years ago, and browsers have improved their warning behaviors since then.

    It is difficult to make the warnings useful. For example, the danger of mixed content is not the same as the danger from an expired certificate, and neither is the same as the danger of a mismatched name, a self-signed certificate, or a broken trust chain. Because the dangers are different, users may reasonably make different choices in those cases, but we shouldn't expect users to understand those distinctions.

    Anyway, it does seem that as browsers get stricter about what they will connect to, web site developers are getting better at getting things properly configured. And browsers are getting better at providing warnings that work. So this situation does seem to be improving.


    1. I have one running argument with Devdatta about tolerable levels of false positives. Consider a comparison with smoke alarms. The false positives (alarm goes off when I'm cooking) are really annoying, but I still always respond to a smoke alarm going off because I can immediately determine whether it is a false positive or not. But when we get a false positive about SSL/TLS, most people cannot determine whether it is a true or false positive. So a high false positive rate is more than just an annoyance: it trains people to ignore the warnings. ↩︎

  • MrC
    Volunteer Moderator
    edited March 2016

    @jpgoldberg ,

    I like this rant. It gets to the heart of the matter, and reveals the span of the vast canyon that separates developers and users.

    Warnings and diagnostics are almost always written for the developer {him,her}self, as a means to catch coding errors, misunderstandings, oversights, etc. Often they are simply last ditch punts. And they are typically sufficient, given their actual audience, the developer. They are written from one side of the canyon.

    On the other side, (ordinary) users do not have the necessary background, skill in the art, or comprehensive gestalt to make any sense of what is being communicated. Like infants not yet having the mental capacity to make any meaning from the words "Look out for that car!", users simply hear meaningless noise. No (reasonable) parent would simply yell such warnings and assume they were sufficient to protect their child. No, they would (should) race to remove the dangerously crawling infant from harm's way.

    Avoiding the crisis is of course the best course - prevent such a situation from occurring in the first place. And yes, that requires extra, almost heroic effort on the part of the parent, the one who does have the comprehension. That is the burden of being a parent.

    Developers need to bridge the canyon, cross it, and own the responsibility of prevention. That is the burden of being a developer.

  • jpgoldberg
    1Password Alumni

    I don't just want to blame the designers of the warnings, @MrC. There is a lot of blame to go with those who misconfigured systems and expected people to just click through the warnings.

    I should also confess to being on the "wrong side" of this 20 years ago. (Though it's not like I recall many voices on the "right side".) I believed then, as I do now, that most people are smart enough to understand the subtle distinctions and possible causes behind different sorts of warnings. What I was wrong about was failing to recognize that most people don't share my curiosity about those subtleties and interactions. They are smart enough to learn if they wish to put in the effort, but most people have better things to do with their time.

    Here is one thing that I was very much on the wrong side of. It was the "null" option for encryption over SSL. I felt, as I do now, that there would be cases where people visiting a website would need to know that its identity was authenticated, but nothing needs to be kept secret. An example is when you download a copy of 1Password. You need to know that you are really getting it from us, but there is no reason to actually encrypt the data as it is being downloaded. So I argued for the ability for a client and a server to negotiate "null" as the encryption algorithm. (I was helping to run a web server at the time and delivering encrypted content was much more expensive for us than delivering unencrypted content.)

    Well, even though I had very good reasons for arguing as I did, I (and those who sided with me) were catastrophically wrong about two things. We simply hadn't considered the scope of downgrade attacks (after all, we were discussing what was to become the first version of SSL), and we hadn't thought about what users expect the "secure lock" icon to mean. Although all of us in the discussion at the time knew that "is X secure?" is an incoherent question (the real question is "is X secure against Y?"), we simply didn't realize that people do think in those terms, and most are not willing to be given lectures on the different types of security.

    Now, of course, I know that if you present something to people as "secure" then it better actually be secure in all the ways that people might reasonably imagine it to be. This is a very challenging thing to do, but it is something that we are always thinking about.

This discussion has been closed.