1Password and the Crypto Wars

Eckius
Eckius
Community Member
edited September 2013 in Lounge

The Guardian, this morning:

"US and British intelligence agencies have successfully cracked much of the online encryption relied upon by hundreds of millions of people to protect the privacy of their personal data, online transactions and emails, according to top-secret documents revealed by former contractor Edward Snowden.

The files show that the National Security Agency and its UK counterpart GCHQ have broadly compromised the guarantees that internet companies have given consumers to reassure them that their communications, online banking and medical records would be indecipherable to criminals or governments.

The agencies, the documents reveal, have adopted a battery of methods in their systematic and ongoing assault on what they see as one of the biggest threats to their ability to access huge swathes of internet traffic – "the use of ubiquitous encryption across the internet".

Those methods include covert measures to ensure NSA control over setting of international encryption standards, the use of supercomputers to break encryption with "brute force", and – the most closely guarded secret of all – collaboration with technology companies and internet service providers themselves.

Through these covert partnerships, the agencies have inserted secret vulnerabilities – known as backdoors or trapdoors – into commercial encryption software."

How can I be sure 1Password / agilebits is not one of these companies? Is there still someone or something that can really be trusted nowadays?

Comments

  • khad
    khad
    1Password Alumni

    From that same article:

    The agencies have not yet cracked all encryption technologies, however, the documents suggest. Snowden appeared to confirm this during a live Q&A with Guardian readers in June. "Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on," he said before warning that NSA can frequently find ways around it as a result of weak security on the computers at either end of the communication.

    To begin with, I would point out that 1Password is a password manager that is an encryption app, not a hosted service. Your data is not stored on our servers. Your data is yours. From our recent “On the NSA, PRISM, and what it means for your 1Password data” blog post:

    We don’t know who you are, but we love you

    We’ve never been asked to turn over data about you. Sure, some of that is because we are a Canadian company, but most importantly is the simple fact that we really don’t have any data to turn over. The easiest way for us to protect your data and data about you is to not have that data in the first place. We can’t reveal or abuse data that we don’t have. You can read the details of the data we do and don’t have.

    In summary, we only have information about you that you explicitly provide to us. If you sign up for our Newsletter, we will have your email address. If you purchase from our store directly, then we have the information you provided at time of purchase (though we only retain partial credit card details). If you contact us through support, we have a record of those communications. If you make your purchase of 1Password through Apple’s app stores, we are only given aggregate information (how many people from which countries).

    We do not have your 1Password data. We do not know your 1Password Master Password. We don’t even know if you use 1Password. We do not know how many items you have in your data or their type. Our image server (used for Rich Icons in 1Password 4) is set up in a way that we never see the IP addresses of individual requests. That server never gives us information about what is in any individual’s 1Password data.

    Quite simply, you don’t have to be concerned about AgileBits gathering information about you. We just don’t have much information in the first place.

    I encourage you to read the rest of that blog post before I go on, but your question goes beyond that. Could we be compelled to change our system to deliberately weaken it? Is it likely that we ever would be "asked" to?

    Until recently, we thought that providers of true end-to-end encryption would be immune to such "requests/orders/compulsion". It's only if you had the capacity to intercept data, passwords, etc., that you could be compelled to do that interception (or allow others to intercept). However, the story of Lavabit terrifies us. Of course we don't know what happened there (anyone who does know isn't allowed to say), but Lavabit was designed to provide end-to-end encryption. (Though it did have access to the encrypted data, unlike 1Password.)

    So prior to Lavabit, we considered the possibility of compulsion so remote that we never worried about it. We still think that it is a small chance, given how 1Password operates. But it is significant enough to be worth some serious thought.

    So here are a few things to keep in mind:

    1. We have developers in four different countries. (CA, US, UK, NL). It would be difficult to gag all of us.
    2. Lavabit has set a precedent in how to respond. I like to think that we would take the legal and financial consequences of refusing to comply, but of course that is an easy thing to say now. Nobody really knows what kind of pressure governments could put on us or how we would personally respond.
    3. We are very open about our data design and security architecture. That should make it harder to deliberately weaken it without detection.
    4. Password managers are not, in general, communication tools. Perhaps that would make us of less interest.
    5. If the NSA/FBI/TLA is seriously after a particular 1Password user it would probably be easier (and less likely to be detected) to attack the target's operating system than to force us to change 1Password's design. That is, it is easier to go around 1Password instead of through it.

    Still, we remain cautiously optimistic that we will never be confronted with such a request, largely because of increased public awareness. The risk of the TLAs getting caught doing something like that, and of there being a public outcry, is very substantial. They lost the Crypto Wars back in the 90s. They are not off to a good start in Crypto Wars II.

    So could they compel us to sabotage our product and cheat our customers? Not without a very high risk of it becoming public. Would they try it? I still don't think so.

    Obviously, we can't entirely rule out such a scenario, but there are things that make it less likely. Because we have provided public details of our data format and encryption, you or anyone with the appropriate skills can check to see whether what 1Password is producing conforms to the design specifications that we've published.

    That doesn't rule out all possible mischief. But it does limit where tampering would have to be. And so it makes it easier for someone on the inside to detect.

    Nothing I say or anything we do can give you absolute certainty. We can't completely rule out every avenue. But what I hope I have offered are a number of independently verifiable things, each of which makes it less likely for 1Password to be tampered with.

    That is, if the difficulty of tampering with 1Password and getting away with it is sufficiently high, then they would attack a weaker part of your security.

  • zexpe
    zexpe
    Community Member

    I think the bigger problem is that the password you send from 1Password to the website you wish to log into over SSL is compromised.

    http://www.theregister.co.uk/2013/09/05/nsa_gchq_ssl_reports/

  • jpgoldberg
    jpgoldberg
    1Password Alumni

    I'm working on a blog article about all of this, so I will have to say that I'll get back to this discussion here later.

    For me, the most disturbing thing about this is that the NSA have actually been weakening crypto instead of merely exploiting weaknesses. This is just so deeply wrong that I am at a loss for words. One of the best write-ups of this from Matthew Green, http://blog.cryptographyengineering.com/2013/09/on-nsa.html

  • Eckius
    Eckius
    Community Member

    Thanks, Khad, for taking the time to answer my question in such a detailed way.
    And let me make clear right away that, in my opinion, nobody has the slightest reason to doubt the quality of AgileBits’s products, or the honest intentions with which they are designed. Personally, I’m a very satisfied user of 1Password, and I’m looking forward to buying the next version as soon as it becomes available.
    I asked what I asked only because I think disclosures like those in The Guardian this morning are rather worrying, and need to be discussed on a forum dedicated to an application like 1Password, whose whole reason for existence is its ability to keep secret what its users don’t want to share with others.

  • jpgoldberg
    jpgoldberg
    1Password Alumni

    Thank you @Eckius, but there is no need to explain. In a world where it is known that the NSA has been working with "industry partners" to weaken cryptographic systems, it is absolutely proper that you wonder whether we have become one of those "industry partners". Don't worry about hurting our feelings. Everyone needs to be able to ask these sorts of questions clearly and directly. (It also gives us the opportunity to answer.)

  • tzs
    tzs
    Community Member

    That was a good blog entry. I liked the point about the company having people in 4 countries, making it quite hard for a gag order to silence all of them.

    However, there's a possible hole in that--if some country ordered the AgileBits office in that country to do something, such as put a backdoor or weakness into the next 1Password update, and put a gag order on that office, would the AgileBits offices in the other countries be able to figure out that this had happened?

    Do all the offices have access to the source code, and the ability and time to independently build each new release and verify that what is going up on the download site matches what they built? And, if there is a mismatch, do they have the authority to make a public statement that people should not download the update until an explanation is offered?

  • bac
    bac
    Community Member

    Thanks Jeff for the blog post. There are two important details you didn't address regarding potential vulnerabilities. One is that the underlying cryptographic algorithms you use may be compromised. I assume you're using industry standard algorithms and implementations for your crypto so the encrypted files may not be as secure as one would hope. You may refuse to put in a back door but one may already exist via the definition of the algorithms. That said, it sounds like symmetric algorithms are still safer.

    The other issue you didn't address is the vulnerability of your sharing partners. Sure, Agile Bits never gets the encrypted file, but to use the product cross-platform it has to go into the cloud. Dropbox and Apple are both listed as cooperating with the Prism program.

    While your post is true with respect to your company's participation, the product itself may be more exposed than you state.

  • fahlman
    fahlman
    Community Member

    @bac The vulnerability of Agile Bits' sharing partners is irrelevant because the file is encrypted before being sent to Apple or Dropbox.

  • bac
    bac
    Community Member

    But @fahlman we are discussing the vulnerabilities of the underlying cryptography. NSA manipulates crypto standards, NSA has access to your encrypted files via Prism cooperating companies, NSA has your data.

    It is all hypothetical right now because we don't know which algorithms are compromised. The full usage cycle of 1Password should be addressed.

  • BenAlabaster
    BenAlabaster
    Community Member

    Where is your source control hosted? Outside the U.S.? What is your source control system? Would it be possible to have it managed only by someone overseas, and to allow code into your central source control system only via pull requests approved by an off-shore member of your team with a strong background in security systems? That way, any code that gets in would be required to have off-shore eyeballs on it before it makes it into your production code-base. That would go a long way to making sure nothing gets snuck in by a legally compelled member of staff.

  • poyntesm
    poyntesm
    Community Member

    Do AgileBits have access to source code for any of the standard crypto libraries you guys rely on? I presume you use MS-CAPI for 1P on Windows and some similar ones for 1P on OSX/iOS.

    Would this encourage you to consider a 1P for Linux product, where you would have the option to review the source code yourselves and more choice of libraries, like NaCl or OpenSSL?

  • jpgoldberg
    jpgoldberg
    1Password Alumni
    edited September 2013

    In this comment, I'll talk about the "what if the crypto algorithms or APIs we use have been compromised?" queries. I'm not ignoring the other ones, but I'm trying to catch my breath. Also this is the question that I've been taking the closest look at.

    We need to separate multiple questions.

    1. Could the algorithms themselves be weakened?
    2. Could the implementations, APIs, etc. of them be weakened?

    I'm going to first look at these for everything we use, with the exception of the CSPRNGs, which I'll talk about later.

    What cryptographic utilities do we call?

    • Primitives

      • AES
      • SHA1, SHA2
    • API provided constructions

      • CBC mode encryption
      • HMAC
      • PBKDF2
    • Our own constructions

      • AEAD (Encrypt-then-MAC)
    • Random numbers (which I will leave aside for the moment)

      • SecRandomCopyBytes (OS X and iOS)
      • CryptGenRandom (Windows)

    How all of these are used is described fully in http://learn.agilebits.com/1Password4/Security/keychain-design.html
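    To illustrate the "Encrypt-then-MAC" construction listed above, here is a minimal sketch. This is not 1Password's actual code (1Password uses AES-CBC for the encryption step); an HMAC-SHA256 counter-mode keystream stands in for the cipher so the example needs only the Python standard library. The key point is the ordering: the MAC is computed over the ciphertext, and is verified before any decryption happens.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-SHA256 in counter mode as a stand-in PRF stream cipher
    # (illustrative only; 1Password itself uses AES-CBC).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # The MAC covers the nonce and the ciphertext: Encrypt-then-MAC.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def verify_then_decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    # Authenticate before touching the ciphertext with the decryption key.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in
                 zip(ct, _keystream(enc_key, nonce, len(ct))))
```

    With this discipline, a single flipped ciphertext bit makes decryption fail loudly instead of silently producing garbage.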

    Everything is a function

    Everything above (except for the random numbers) is a function in the mathematical sense. Given a key and a block of data, AES(key, block) will always produce the same output with that same key and block of data. Given some fixed data and a key, HMAC-SHA-X(key, data) will always give the same output.

    If these functions were to give "incorrect" output they would simply fail to work at all. Things would very visibly break extremely quickly. Web browsers and servers wouldn't be able to talk to each other, data encrypted on one system couldn't be decrypted on another. Cats and dogs would live together.

    So we know that the output of all of these is as designed. There simply can be no tampering of these to give bogus bits.
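    This determinism is easy to see with any of these functions. A quick standard-library sketch (HMAC-SHA-256, since Python ships it):

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, data: bytes) -> bytes:
    # HMAC-SHA-256 is a pure function of (key, data): every conforming
    # implementation, on every platform, must return these exact bytes.
    return hmac.new(key, data, hashlib.sha256).digest()

# Two independent calls (or two independent implementations) must agree.
# An implementation tampered to emit different bits would fail to
# interoperate with everything else, immediately and visibly.
assert hmac_sha256(b"key", b"message") == hmac_sha256(b"key", b"message")
```

    That interoperability requirement is exactly why tampering with the *output* of these functions can't be hidden.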

    But could there still be deliberately bad implementations of those?

    We certainly know that there are accidentally bad implementations of those. Pretty much everyone who reads the standards and specs and attempts to write their own implementation will get things wrong, even if they always get the right output from the right input. So clearly there can be deliberately broken implementations that still produce the right output.

    You might be asking how an implementation of AES which always gets the right output could still be weak. The answer is: with respect to side channels. Does the implementation leave sensitive data in memory? Does it take more or less time with certain kinds of keys or data? Does it consume more power with certain kinds of keys or data? Those sorts of things can be used by an attacker to learn things about secret keys and data from very subtle measurements of the system while it is performing cryptographic operations.

    We don't have to worry about those. If an attacker is deep enough into your system to be able to watch things that closely, then they already have more than enough access to obtain the keys directly. Side channel attacks are relevant for crypto devices with embedded secrets (like smart cards) or networked encryption operations (like an SSL server) which can be sent lots of data so that the timing of responses can be analyzed.

    So we have no reason to worry about deliberately weakened implementations of these cryptographic functions affecting 1Password.

    Could the algorithms, standards, be deliberately weakened?

    Backdooring of Dual_EC_DRBG

    We know now that the NSA has deliberately weakened at least one standard, Dual_EC_DRBG; could they have done this with AES or SHA2? To answer this we need to look at what happened with Dual_EC_DRBG.

    The backdooring of Dual_EC_DRBG is absolutely fascinating mathematically. When I explained it to my 14 year old daughter, she decided that she wanted to work for the NSA: "Yay! Evil mathematicians. Just like James Moriarty!"

    The possibility of such backdooring was discovered and published back in 2007.
    Furthermore, people knew that NIST had not fully justified some of the choices in that specification or adequately explained where they came from.

    So everyone knew that it would have been possible for the NSA to have backdoored it. And even if the backdoor had not been put there deliberately, it was a potential backdoor that someone else could still discover and exploit. These are among several reasons why pretty much nobody uses Dual_EC_DRBG. (There are others as well: there are much faster and easier to implement alternatives.)

    Is AES different

    AES is different from Dual_EC_DRBG in many crucial respects.

    1. AES has been studied by enormously more people and far harder and far longer than Dual_EC_DRBG.

      The possibility of a problem with Dual_EC_DRBG was discovered almost immediately after it was proposed. AES, on the other hand, has been intensively scrutinized for 15 years. The mathematics behind AES is also more accessible. To understand Dual_EC_DRBG, you need to know all of the math behind AES and then quite a bit more.

    2. The math is different.

      The math behind AES doesn't allow for (that kind of) a backdoor. There are some arbitrary choices in the design of AES, in particular, which primitive polynomials are to be used for defining certain group theoretic operations. However, those operations aren't such that their security could depend on knowing some secret about the chosen polynomials. Because a different kind of math is being done here, the scope for malicious tinkering and hidden secrets is far smaller.

      Likewise with some of the choices of the constants in the S-boxes. The design of AES doesn't leave a lot of opportunity for undetectable mischief.

      So mathematically, the NSA would not have had the same kind of opportunities to backdoor AES as it had with Dual_EC_DRBG.

    3. The process was different

      The process that resulted in the establishment of AES was far far more open and accountable. Demonstrations were required that the "arbitrary" parts of AES were selected in manipulation-proof manners. The world was watching, and NIST knew that if even the shadow of a whiff of a hint of impropriety were spotted, the process would be a failure.

      Indeed, I suspect that NIST will no longer be a player in the development of cryptographic standards. Even though they have behaved extremely well in some cases (e.g., AES), the fact that they can be manipulated by the NSA into behaving badly dooms their future role.

    4. AES standard is "simple"

      There is reason to believe that one way the NSA has sabotaged standards (in this case IPsec) is simply to make the whole thing so messy that it is unworkable. By comparison, the AES process was very different. [Update: after reviewing emails from 1995, I am skeptical of much of Gilmore's interpretation of events.]

    5. The NSA wouldn't do something like that to AES

      Ha! (but it really isn't funny)

      A week ago I would have pointed out that part of the NSA's job is to protect American citizens, businesses, and government from "enemy" cryptographic attacks, and so would never, ever try to actually sabotage the cryptographic standards.

      I would have described the story of DES, where the NSA tinkered with the standard (without explaining why) and a decade later it was discovered that their tinkering actually made DES stronger.

      That was then. This is now.

    Ultimately, I remain confident in AES. I also think many alternatives to AES haven't held up as well to scrutiny over the past decade. (Please don't recommend Twofish. We all love Bruce Schneier, but that doesn't make his algorithms magic.)

    AES and SHA2 are not the concern

    If there is sabotage of the cryptographic tools we use, it won't be in the AES. It's the CSPRNGs where difficult to detect tampering could occur.

    We actually already take precautions against that, and we are looking at expanding those. But that will have to be the subject of a different comment.

  • jpgoldberg
    jpgoldberg
    1Password Alumni
    edited April 2014

    If we are to worry about malicious crypto utilities in OS X, iOS, and Windows, it would be with the (alleged) Cryptographically Secure Pseudo-Random Number Generators (CSPRNGs). That would be the place where a difficult to detect weakening could occur.

    I never finished my follow-up article on Alan Turing and Randomness, so I don't have something I can point you to about what sorts of properties are needed for a CSPRNG and how these are achieved. So some of what I say here may not make sense.

    There are two things that we generally want from a CSPRNG. That the data it produces be "unbiased" and that the bits that it outputs be "unpredictable" from previous output. (These actually both follow from the same mathematical property, and I am playing very fast and loose with all of this here. I really am describing this wrong, but it is wrong in a way that helps me get to the right place. Don't take this as a reliable lecture on RNGs.)

    How does 1Password use CSPRNGs

    In the simple case, when 1Password needs to create an encryption key, it calls the system-provided CSPRNG for a random number to use as a key. On Mac and iOS that is SecRandomCopyBytes() and on Windows that is CryptGenRandom(). So when various items are encrypted with AES keys in 1Password, the keys are picked at random this way.

    We take extra steps when it comes to the master keys (the keys that are protected by your Master Password and are used to encrypt other keys used within 1Password). But I will leave those extra precautions until I've explained a bit about what we expect from a cryptographically secure random number generator.

    Unbiased

    If we ask a CSPRNG for 32 bytes (256 bits), we want to know that the output we get could be any 256-bit number with equal probability. We don't want any outputs to be more likely than other outputs in the range.

    Let's look at how a CSPRNG could be weakened so it fails to do this. Suppose when you ask for 128 bits, the bogus CSPRNG first does a good job of generating 40 bits, then hashes those 40 bits with HMAC-SHA1 using a built-in secret, and gives you the first 128 bits of the result.

    You think you have a good, strong 128 bit key. But an attacker who knows the built-in secret only has to work through 2^40 possibilities instead of all 2^128 possibilities. Working through the former is easy for the NSA; working through the latter is not feasible.

    This way only a small portion, 2^-88, of all 2^128 possible outputs can actually occur. Most outcomes simply aren't possible. Normal statistical analysis of the outputs wouldn't reveal the bias (as long as SHA1 does its job).

    (Note the attack wouldn't be implemented as described, but it would be something with a similar effect.)

    But in this case, with only 40 bits of real entropy for every 128 bits of output, it would be possible, with effort, to detect the problem. If someone were to just ask for enough output and keep track of what has already come out, they would see "collisions". With 40 bits of real entropy, after asking for just a million keys, there is a 50% chance that two keys would be the same. But finding such a collision in a million items with 128 bits of entropy is virtually impossible.

    The ease of detection (through collisions) puts an upper bound on the entropy reduction an attacker could get away with. I haven't looked into this closely, but I just don't think that anything under 85 bits of real entropy per 128 bits could go undetected. (I'm sure there is research on this, but I haven't looked for it.)
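    The collision check described above is simple to simulate. In this sketch I scale the weakness down to 16 bits of real entropy so it runs instantly (the 40-bit version is the same idea with about a million samples), and the "backdoor secret" is of course hypothetical:

```python
import hashlib
import os

SECRET = b"hypothetical-backdoor-constant"

def bogus_key_128() -> bytes:
    # A deliberately weakened generator: only 16 bits of real entropy,
    # stretched to 128 plausible-looking bits with a hash. Statistical
    # tests on individual outputs won't notice anything wrong.
    seed = os.urandom(2)
    return hashlib.sha1(SECRET + seed).digest()[:16]

def has_collision(gen, samples: int) -> bool:
    # Birthday-bound detection: repeats show up after roughly
    # sqrt(space) draws -- about 2^8 here, about 2^20 for a 40-bit space.
    seen = set()
    for _ in range(samples):
        key = gen()
        if key in seen:
            return True
        seen.add(key)
    return False

# 2000 draws from a 2^16 space collide with overwhelming probability;
# 2000 draws from a true 128-bit generator essentially never do.
```

    The asymmetry is the whole point: the weakened generator betrays itself through repeats long before any single output looks suspicious.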

    Predicting a stream

    In some uses of a CSPRNG, a whole long series of bits is output in a stream. You don't want it to be possible for someone who has seen just a portion of the stream to make a better-than-chance guess at any of the subsequent bits. (What the NSA did with Dual_EC_DRBG is give themselves a way to make exactly those sorts of predictions when people use that RNG for a stream.)

    Now, because of how 1Password uses CSPRNGs, we don't have to worry about these sorts of attacks. We are only getting small chunks of random data at a time.

    Extra steps with the master keys

    When you create a new vault with 1Password, four keys are created (I'm describing the 1Password 4 Cloud Keychain Format but the same concept applies to the Agile Keychain; just the specific numbers of keys and bits differ.). These are the master AES key, the master MAC key, the overview AES key and the overview MAC key. Each key is 256 bits.

    Instead of just asking for 512 bits for each pair (AES, MAC) of keys, we actually get 256 bytes (there are 8 bits in a byte) and then use SHA-512 to extract 512 bits from those 256 bytes. That is, we get four times as much data from the CSPRNG as we actually need. By using SHA-512 we make the keys depend on all of that data, so we get the full 256 bits of entropy for each key even if there is a hidden bias in the CSPRNG.
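    The oversampling idea can be sketched in a few lines. This is an illustration of the technique, not 1Password's actual code (the function name and the use of Python are mine):

```python
import hashlib
import os

def derive_key_pair():
    # Draw 4x more random data than the 512 bits we need...
    pool = os.urandom(256)  # 2048 bits from the system CSPRNG
    # ...then extract 512 bits with SHA-512 so that every key bit
    # depends on the entire pool. A hidden bias in the CSPRNG would
    # have to be enormous to survive this extraction step.
    digest = hashlib.sha512(pool).digest()  # 64 bytes = 512 bits
    aes_key, mac_key = digest[:32], digest[32:]
    return pool, aes_key, mac_key
```

    As the post notes below, it is the 256-byte pool (not the derived keys) that then gets encrypted with the key PBKDF2 derives from the Master Password.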

    By the way, it is this 256 byte block of data which is encrypted with a key derived, via PBKDF2, from your Master Password.

    Accident or malice

    We introduced this mechanism back with the Agile Keychain, not so much because we were concerned about maliciously designed or implemented CSPRNGs, but because we know that CSPRNGs are really, really hard to get right. Lots of operating systems have had buggy ones. So I'm not claiming that we were particularly foresightful about deliberate attacks on the CSPRNG in this design; the precaution was taken against potential accidental errors in the CSPRNG implementation. But often, defending against one is the same as defending against the other.

    What now?

    We are not going to jump to make any sudden changes. As Adam Caudill points out, panicked people tend to make bad security decisions. [Please note that his AES-512 thing is a joke. It is an example of a really bad idea that some panicked people might reach for.]

    It will take time to review the independent analyses of these CSPRNGs and look at what the expert cryptographic community has to say about them. We may wish to extend the precautions that we take to all keys and not just the master and overview keys. We might find it wise to implement different, additional, safety measures in how we get our random numbers. Or we might conclude that what we have is more than sufficient.

    At the moment the cryptographic community is still dealing with their anger. (Or maybe I am just projecting.) It will take time to evaluate and develop reasoned recommendations.

  • bac
    bac
    Community Member

    Jeff thanks for these replies. I really appreciate the amount of detail you went into to address the concerns raised.

  • lhotka
    lhotka
    Community Member

    http://www.propublica.org/documents/item/785571-itlbul2013-09-supplemental#document/p2

    NIST now recommends that folks stop using Dual_EC_DRBG

    Do you know if the OSX functions you're calling use that under the hood?

  • khad
    khad
    1Password Alumni
    edited September 2013

    @lhotka, the short answer is: no. We do not use Dual_EC_DRBG.

    Just in case you missed Jeff's "Backdooring of Dual_EC_DRBG" section above:

    So everyone knew that it would have been possible for the NSA to have backdoored it. And even if the backdoor had not been put there deliberately, it was a potential backdoor that someone else could still discover and exploit. These are among several reasons why pretty much nobody uses Dual_EC_DRBG. (There are others as well: there are much faster and easier to implement alternatives.)

    It's long. He's been quite prolific lately. I don't blame you if you didn't read all of it. :)

  • lhotka
    lhotka
    Community Member

    :-) I did read that, was just wondering if apple used it under the covers. Sometimes their internals aren't widely understood.

  • jpgoldberg
    jpgoldberg
    1Password Alumni

    There really isn't a place in what we do where something like Dual_EC_DRBG plays a role. At no time do we need a deterministic random bit generator, and neither does anything we call.

    In principle something like this could be used underlying the CSPRNG in the system, but it would probably be the most cumbersome and inefficient way of tampering with the CSPRNG imaginable.

    Just an update: I've been studying up on the kinds of defenses available against a not-quite-cryptographically-secure PRNG. The defensive measures that we already have in place work reasonably well against the most likely form of attack, but they don't address other things an attacker could do by influencing the PRNG, because a hash construction like SHA-256 is missing one of the cryptographic properties desirable for key extraction. The likelihood of those sorts of attacks is pretty close to negligible, but I expect that in the coming months we will be making some small, extra-defensive additions to how we treat data coming from the PRNGs that come with the operating systems. Still, we might conclude that current defenses are sufficient. I'm just giving you a bit of an update based on what I've been looking into.

    At the moment, what I find most appealing is HKDF. It has both the advantage of being usable (adjustable) for a number of different kinds of situations, including ours; but it has the disadvantage of being adjustable for a number of different kinds of situations. So we have to ensure that we clarify our security assumptions and requirements before we figure out how to best use HKDF within our key derivation processes.
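    For the curious, HKDF (RFC 5869) is just two HMAC-based steps, "extract" and "expand". A minimal sketch of the standard construction with SHA-256 (this illustrates HKDF itself, not what 1Password decided to ship):

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # Extract: concentrate the entropy of the input keying material
    # (e.g., raw CSPRNG output) into one fixed-length pseudorandom key.
    if not salt:
        salt = b"\x00" * hashlib.sha256().digest_size
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # Expand: stretch the pseudorandom key into `length` bytes of
    # output keying material, bound to the context string `info`.
    hash_len = hashlib.sha256().digest_size
    t, okm = b"", b""
    for i in range(-(-length // hash_len)):  # ceil(length / hash_len)
        t = hmac.new(prk, t + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += t
    return okm[:length]
```

    The `info` argument is what makes HKDF "adjustable": the same extracted key can safely yield independent sub-keys for different purposes, which is both the flexibility and the footgun mentioned above.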

  • jpgoldberg
    jpgoldberg
    1Password Alumni

    The Lavabit case has been unsealed. Basically, in every respect it is as bad or worse than I'd feared. See http://www.wired.com/threatlevel/2013/10/lavabit_unsealed/

    I did, however, smile at one point very late in the process:

    Levison complied the next day by turning over the private SSL keys as an 11 page printout in 4-point type. The government, not unreasonably, called the printout “illegible.”

  • benfdc
    benfdc
    Community Member

    I suppose we should take heart from the fact that the government had to resort to a subpoena to try to get hold of Lavabit’s SSL keys. One might infer that SSL does work (trust the math, as Schneier writes), and that the spooks had no backdoor means to swipe the keys.

  • benfdc
    benfdc
    Community Member

    The meaning of it all. I'm guessing that Jeff is a Type B.

  • AltmanSoftware
    AltmanSoftware
    Community Member

    I'd like to echo the previous comment of thanks for taking this subject seriously. After all is said and done, the only guarantee we have that 1Password doesn't have some form of backdoor is trust; trust that is earned.

    I've always appreciated the way agilebits does business and how you treat me as a customer; your technical openness and thoroughness speak volumes here.

    As you have said, many of these issues were previously beyond the scope of concern; not so true now. I came to this forum today to evaluate whether or not it was prudent to continue trusting a closed-source and pre-compiled password manager and I am heartened by the frank and honest discussion in this thread. Please continue examining these issues and discussing them openly; for my part, I am continuing to use and endorse 1Password,

    Thanks!

  • khad
    khad
    1Password Alumni

    Thank you, @AltmanSoftware. We wouldn't be able to work at our dream jobs without support from folks like you. We genuinely appreciate you taking the time to voice your support.

  • jpgoldberg
    jpgoldberg
    1Password Alumni

    Great comic, @benfdc!

    I really am none of those. I don't go to extraordinary measures to stop "them" from spying on me (as you suggest with me being in panel B). But I do go to extraordinary measures to stop "them" from spying on you. I want to live in a society in which people can easily keep their communications private if they wish to.

    For example, I have been aware of Lavabit for a while and of Silent Circle since its founding. But I don't actually use those services myself. But I very very much want them to succeed.

  • benfdc
    benfdc
    Community Member
    edited October 2013

    @jpgoldberg—

    True story you can probably relate to:

    I made a very high-tech OpenPGP key a few years ago. No real need for it—none of my correspondents use PGP—but I hadn’t really explored the program since the days of PGP 2.x and I was curious about how things had evolved over the years. As long as I was going to create a new key, I researched stuff so that I would “do it right.” But the subkeys to my super-sophisticated key have all expired and I haven’t taken the time to do anything about it.

  • Uno_Lavoz
    Uno_Lavoz
    Community Member

    @benfdc That's so awesomely dorky. :D

  • Griz
    Griz
    Community Member

    Recently, it seems like some of the big e-mail/social media players (e.g., Google, Twitter, etc.) have announced "end-to-end" encryption and other such blather, supposedly to assuage our concerns about the NSA tapping into all of our electronic communications. Sounds good on its face.

    Within the last few months, I think it was Jeff who posted something here to the effect that, on principle, AgileBits would never allow itself to be forced by the NSA or its ilk to compromise 1PW with a backdoor or the like. Besides, we were told, they are Canadian, so the US government has no sway over them.

    That too sounds good on its face.

    But it's hard to believe that the Feds would allow us mere mortals to take back the power the NSA has obtained, at least without much more of a fight. Would they really allow their pawns at Google, MS, Yahoo, Apple, AT&T, Verizon, etc. to thwart the US imperative to know all, thereby making us all so, so safe?

    Anyone remember PGP? It ostensibly provided a strong ability to keep the snoops out, but where did it go? Per the Wikipedia page:
    "On April 29, 2010 Symantec Corp. announced that it would acquire PGP for $300 million with the intent of integrating it into its Enterprise Security Group.[22] This acquisition was finalized and announced to the public on June 7, 2010. The source code of PGP Desktop 10 is available for peer review."

    At least in the consumer space, is anyone using PGP anymore, or was it allowed/forced to die a slow death?

    And what about 1PW? Will all these bugs and issues kill it in a similar manner? Gee, if the Feds didn't engineer its death, they sure must be applauding all the help Agilebits is providing in accomplishing that goal.

    Come on, folks. What business in its right mind would force a half-baked and clearly crippled product out the door just to make some artificial deadline? How many diehard fans will still be around to trumpet 1PW once it finally starts working the way it should? As for that deadline, where precisely was the fire? More to the point, is this botched release a case of incompetence or intent?

  • jpgoldberg
    jpgoldberg
    1Password Alumni

    Hi @Griz,

    I moved your comment into an already existing discussion. We have three semi-active threads covering roughly the same ground, but I picked the one closest to the blog post you cited.

    Quite frankly, I'm not going to address all of your points. I will say that I'm very sorry that you've been bitten by bugs. Bugs that we missed during testing because they affected only a small portion of users still affect a large number of people in absolute terms. We are getting lots fixed, and I'd recommend that you discuss the particular ones where they come up.

    You, and anyone, can verify that our encryption is truly end-to-end in the real sense of the word. (A number of services have misused the term to mean "encrypted at every stage".) For me, "end-to-end" means that the data is encrypted by the sender with a key, k, decrypted by the recipient with that same key k, and only the sender and the recipient ever have any access to that key. Note that "sender" and "recipient" can be the same person, with the data being sent over time or space.

    Your 1Password data is encrypted with a key derived from your Master Password. No one other than you has that Master Password or can gain access to the key without it. This is what makes 1Password end-to-end, and all of that is verifiable. You can see that our data format does what we say it does.
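    The shape of that derivation can be sketched with a standard password-based key derivation function. This is a hedged illustration using PBKDF2 from Python's standard library; the salt and iteration count here are placeholders for the example, not 1Password's actual parameters.

```python
import hashlib

def derive_key(master_password: str, salt: bytes,
               iterations: int = 100_000) -> bytes:
    # PBKDF2-HMAC-SHA256: a slow, salted derivation, so the only
    # practical route to the encryption key is knowing the password.
    return hashlib.pbkdf2_hmac("sha256",
                               master_password.encode("utf-8"),
                               salt, iterations, dklen=32)

# The same password and salt always yield the same 32-byte key;
# without the password, an attacker must guess it outright.
key = derive_key("correct horse battery staple", b"per-vault-random-salt")
assert len(key) == 32
```

Because the salt is random per vault and the iteration count is deliberately high, precomputed attacks are blunted and each Master Password guess is expensive, which is what makes "only you hold the key" more than a promise.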

    I think that you have missed part of the PGP story. There is the commercial product PGP, but there is also PGP/GnuPG, the open source tool. The history goes back to the weirdness of some dubious patents, but the open source project hasn't been "bought" by anyone.

    Fifteen years ago, I was a huge advocate of PGP. I was also in the perfect position to promote it: I was the Postmaster at a technical university in the UK. So I had a community that was technically sophisticated and smart, and I could run training sessions and set things up.

    The only thing I couldn't actually do was install or give any of this software to anyone; export restrictions actually forbade me from giving it to someone who wasn't a US citizen or permanent resident. Indeed, a British colleague, Pete, and I put on a little "show" about this. He would hand me a disk with PGP or SSH or whatever we were helping someone set up on their PC, and I would start to hand it back to Pete. But right before actually handing it back to him, one of us would "remember" that I wouldn't be allowed to do that. Nor would I be able to give it to the person who was our (captive) audience for this show. Before it got too annoying, Pete would "discover" that he actually had a second disk with him and would proceed with the installation.

    Anyway, back to my failed prediction that everyone would be using PGP "in a few years". The problem with PGP isn't just that it wasn't "user friendly"; the problem is that to use it correctly a person needs to understand subtle concepts. The distinction between "trusting someone as an introducer" and "trusting that a key belongs to a particular individual" is just one of the many things people must grasp in order to understand the "web of trust", and so to know which keys to "trust" or even to sign. There simply isn't a way to use PGP securely without understanding that, although I did try to set up a university-wide keychain to deal with it.

    So PGP/GnuPG is alive and well. But it is used by a very small portion of people because it can't be used properly without people having to learn some counter-intuitive concepts and subtle distinctions.

    1Password, of course, has lots of subtleties to it, as does any well-designed encryption tool. But fortunately, people don't need to understand those to be able to use it properly. Sure, we hit the occasional confusion about encryption passwords versus authentication passwords, but things like that only rarely come up in normal usage. So the "mental model" that 1Password presents to a user is much, much simpler than what is really going on, but the discrepancies don't lead to insecure behavior. (And we do try to provide explanations of those subtleties to those who are curious.)

    Cheers, -j

This discussion has been closed.