An article just published in the Washington Post says 1Password and others have security flaws

Comments

  • As a paying, longtime (and mostly happy) customer, I do not like the attitude AgileBits is showing when criticized for its security design. Your user base does not consist only of laymen. Take the criticism seriously.

    Using 1Password's own architecture paper as a simple heuristic: there is no mention of a memory-handling design. Statements like the following have now been shown to be false:

    Your Master Password exists only in your memory. This fact is fantastic for security because it makes your Master Password (pretty much) impossible to steal.

    Also, your comparison with JavaScript engines in browsers is misleading (by omission), because we have known since Feb. 20 that 1Password does not clear sensitive memory on macOS and Windows either:

    JavaScript (...) offers us very limited ability to clear data from memory. Secrets that we would like the client to forget may remain in memory longer than useful.

    Constantly citing how hard alternative solutions would be is simply not enough. A password manager is not run-of-the-mill software but a central security component.

    So, Agilebits, what is your action plan? What are your commitments?

  • @jpgoldberg

    Regarding the problem with immutable strings: would it be feasible to implement the small data structures handling the actual secrets in a language (like C) that allows you to do the memory management yourself, just for that restricted part of the code?
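    The idea above can be sketched in C. To be clear, this is an illustrative sketch, not anything from 1Password's codebase: the `secret_buf` type and function names are invented here. The key detail is routing the wipe through a volatile function pointer so the compiler cannot eliminate a "dead" store to memory that is about to be freed:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical secret-holding structure; names are illustrative only,
 * not 1Password's actual design. */
typedef struct {
    unsigned char *data;
    size_t len;
} secret_buf;

/* Calling memset through a volatile function pointer prevents the
 * compiler from optimizing the final wipe away as a dead store. */
static void *(*const volatile secure_memset)(void *, int, size_t) = memset;

secret_buf *secret_new(const void *src, size_t len) {
    secret_buf *s = malloc(sizeof *s);
    if (!s) return NULL;
    s->data = malloc(len);
    if (!s->data) { free(s); return NULL; }
    memcpy(s->data, src, len);
    s->len = len;
    return s;
}

/* Deterministic wipe: runs exactly when called, unlike GC finalization. */
void secret_wipe(secret_buf *s) {
    secure_memset(s->data, 0, s->len);
}

void secret_free(secret_buf *s) {
    if (!s) return;
    secret_wipe(s);
    free(s->data);
    free(s);
}
```

    At lock time the host application (written in Swift, C#, or anything else) would call `secret_wipe` over FFI for each secret held by this small C core, while everything outside the core stays memory-safe.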

  • Great, thanks for taking the time to explain the situation in detail and the practicalities of security if someone already has access to live RAM on a server. Much appreciated :)

  • brenty

    Team Member

    So your average user should have known to ask 'wait, do you mean when it's locked, or when it's shut down?'

    @arabbit: No, they shouldn't. That was the focus of Goldberg's comments above. That's why this is important.

    It is removed from local storage by the clients.

    How does travel mode achieve this but normal operation can't?

    Because with Travel Mode enabled, there is no data. Any vaults not marked "safe for travel" are removed completely. Even when on disk, with Travel Mode disabled, the data is encrypted. This discussion is about data in RAM, which was decrypted in order for it to be accessible, and the fact that it is not always removed immediately upon locking.

    Per your team, local memory is so difficult to exploit that it's essentially a non-issue, so why is it a problem if a browser does the same thing?

    I'm not 100% sure what you're asking here, but the browser is not completely local; almost certainly it's connected to the internet, communicating with remote servers, some benign, some not. So it seems to me that it's an entirely different matter. But I don't have the context of your question, so that may not fully cover it. If so, please let me know.

  • brenty

    Team Member

    But malicious software, while a very important thing to combat, is not the only problem here. As things are now, there are poorly installed (using homebrew) 1Password instances on some computers, including at our office. These installations do not come with the SIP protections that macOS offers; rather, they run in regular user space, and you can attach a debugger and dump their memory.
    Due to this fault, I can currently walk to an unlocked Mac at our office with 1Password locked and, within 30 seconds, dump all the passwords to a file and copy them to the Internet, without installing anything and without privileged access. And exploits enabling memory dumps happen quite often, this homebrew thing just being one of them. I'm sure there are other ways to trick 1Password into running without the Hardened Runtime enabled. And if I'm not entirely mistaken, this is the state of things for 1Password on Windows as well, unless you run 1Password in privileged mode, which then disables browser integrations.

    @tesmi: You're not wrong. But neither we nor Apple recommend or support that, for that very reason. The same happens with people who install Wireshark on their machines and break their own network security. We'd occasionally hear from someone that they were able to eavesdrop on local communications with the browser, but that only works because they'd explicitly allowed that to happen. We can't stop you from doing those things of course, but I think we should only be held responsible for our decisions, not yours.

    At the very minimum, I would expect 1Password being locked to mean that the means for unlocking the vault cannot be accessed without privileged access. Currently the means exist in user accessible memory.

    Not "user accessible memory", but your point stands. That was the focus of Goldberg's comments above, and to paraphrase arabbit, the average user should not have to think about what's in memory, or whether to lock or quit the app. That's on us, and hopefully we can find ways to improve the situation without introducing other security issues.

  • brenty

    Team Member

    I hate to sound like a broken record,

    @DeDefiance: Listen, if you'll bear with us doing the same, it's all good! :)

    but regarding keylogging, wouldn't U2F basically fix/prevent this completely? Even IF someone infected your entire computer, and they keylogged your master password, as well as compromised your TOTP somehow, they STILL wouldn't be able to get into the vault without the hardware key.

    Here's the thing: When we're talking about encrypted data stored on the device, authentication doesn't come into play. Otherwise you wouldn't be able to access your data at all without an internet connection. The protection there is encryption. Authentication is used when you connect to the server to sign in, get new data, etc. That's where a hardware token would potentially be useful. But if an attacker has access to your device, they don't need to authenticate and connect to a server to get the encrypted data; they can just try to attack it directly...or, more likely, since brute forcing it would be problematic, just capture it as you access it there.

    The relevance of this discussion on the ISE paper comes in after that. No matter what, data needs to be decrypted in memory for you to do anything with it. But when 1Password is not able to clear that memory immediately when locking, that gives an attacker inside your system a potentially larger window of opportunity to get it, until it is cleared by the OS frameworks' memory management.

    However, even if this was implemented, this would be completely circumvented due to the vault being in memory in plaintext.

    Correct. But again, if they're already in your base, stealing your data, having it remain in memory longer doesn't really help them. That's not to say we don't want to improve this. We do. But more as a matter of principle, because of the lock metaphor, as this doesn't open up a new attack vector.

    May I inquire as to why passwords can't be encrypted on a per password basis rather than just the whole vault itself?
    For example; Say you open 1Password, the passwords are all encrypted in memory, once a user clicks "Copy", it decrypts only that 1 password.

    That would be better, and it's something we'd like to do, along with getting more control over when memory is cleared. We just don't want to do that at the cost of the security benefits we've gained from not managing memory manually, as it's easy to make mistakes with that with complex software.

    This way, sure if you're infected the hacker will see this 1 password in memory, but not your whole vault. That seems like a much better solution to me but I'd be interested to hear any counter points to this.

    You're not wrong, but more realistically, an attacker who has a foothold in your system just needs to be a little more patient in that case. We do want to make it more difficult for the attacker in any way we can, though, as long as it isn't even more costly to the user.
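    DeDefiance's decrypt-on-demand idea can be sketched roughly as follows. This is a toy illustration, not 1Password's design: a repeating-XOR "cipher" stands in for a real AEAD such as AES-GCM, and in practice the vault key would itself live in protected, non-swappable memory. The point is only the shape of the API: the vault stays ciphertext in memory, one item is decrypted when copied, and the plaintext is wiped immediately after use.

```c
#include <stdlib.h>
#include <string.h>

#define KEY_LEN 32

/* Toy repeating-XOR stand-in for a real cipher; do NOT use for real data. */
static void xor_buf(unsigned char *buf, size_t len,
                    const unsigned char key[KEY_LEN]) {
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % KEY_LEN];
}

typedef struct {
    unsigned char *ct;   /* item stays encrypted at rest in RAM */
    size_t len;
} vault_item;

/* Encrypt an item into the in-memory vault. */
int item_store(vault_item *it, const unsigned char *pt, size_t len,
               const unsigned char key[KEY_LEN]) {
    it->ct = malloc(len);
    if (!it->ct) { it->len = 0; return -1; }
    memcpy(it->ct, pt, len);
    xor_buf(it->ct, len, key);
    it->len = len;
    return 0;
}

/* Decrypt exactly one item on demand; the caller wipes `out` after use. */
void item_copy(const vault_item *it, unsigned char *out,
               const unsigned char key[KEY_LEN]) {
    memcpy(out, it->ct, it->len);
    xor_buf(out, it->len, key);
}
```

    An attacker snapshotting memory between uses sees ciphertext plus whatever key material is resident; as brenty notes above, a patient attacker with a persistent foothold still wins, so this narrows the exposure window rather than eliminating it.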

    (I'm not sure on the limitations of C# since I'm not experienced with it. However, from friends that work in the game industry I can say for certain that some games are able to do this to protect themselves from cheat developers, so it's definitely doable.)

    It can certainly be done a number of ways, but again giving up memory safety has its own risks. When a game crashes, it's bad for the experience and can allow for exploitation. When security software crashes, there's a lot more at stake.

  • KeePass has a page explaining why it is technically not possible to fully scrub the memory - see the last line in the screenshot at the bottom.

    Any purported programmers or 'experts' on here need to explain how to implement effective memory scrubbing in a consumer GUI program (and not just quote 'other industries do this'). Don't just say "it's the job of 1Password to think about this" - they clearly have but dismissed it on the grounds of the additional (and more realistic / probable) risks it'd create and the loss of functionality.

    A real expert will tell you that security is a series of trade-offs.

    As for people mentioning Bitwarden being more secure - it isn't. In fact it's horribly vulnerable to all sorts of attacks - see their recent audit. The Bitwarden audit doesn't even go on to mention their use of Google Analytics, which the sole author of the program promised to remove, but didn't. Guess who gets your passwords? Google.

    I will suggest an immediate change to the 1Password developers. When a user presses lock, restart the program. This'll work as a temporary solution to the LML problem.

  • derek328
    edited February 2019

    @oneagilebits:

    Using 1Password's own architecture paper as a simple heuristic [...] Statements like the following now have been shown to be false:

    Your Master Password exists only in your memory. This fact is fantastic for security because it makes your Master Password (pretty much) impossible to steal.

    I completely agree with you. 1Password's responses have been extremely worrying to me as well.

    Looking at 1Password's website, forums & security design paper now (which I have made local copies of), there definitely seems to be an abundance of statements from AgileBits that, to me as a legal layman, may constitute a violation of Canada's Competition Act, one of many regulations relating to false and misleading marketing / advertising.

    ------------------------------------------------------------------------------------

    Now, to respond to @jpgoldberg's response to me:

    1 - why would you ever build 1Password 7, a security product, using a language that does not allow strong memory management?

    @jpgoldberg: I've largely answered that question in https://discussions.agilebits.com/discussion/comment/493107/#Comment_493107 You didn't like the answer, but repeating the question isn't going to get you a different answer.

    Assuming you are the chief product security architect, were you personally aware of methods (like separating the UI from core protected processes & development in Rust / C# etc) that would've provided better protection (against DMA & other attacks) than what is sold & shipped to real customers today as of 1Password 7?

    If you knew your current design has an inherent weakness that cripples 1Password 7 & can leak our private information triggered by nothing more than a simple memory dump, why would 1Password still advertise itself as such: "If someone has not been given access to a vault, it is impossible in all practical terms for them to decrypt its data"

    Well, given what the security researchers discovered, I feel it now looks like false advertising, since our data can be found in plaintext.

    2 - why would you still then commit the entire 1password db into cache memory, especially since your team already knows 1Password 7 cannot provide secure memory management? that was a poor product design.

    @jpgoldberg: I will defer to others on that question. (@MikeT ?)

    Would appreciate a properly supported technical & layman explanation as well, @MikeT. Thanks.

    3 - why does 1Password consider something as basic as memory & secrets scrubbing to be "unfeasible", even though it is entirely feasible, a functional necessity, and is in fact often a basic requirement in industries like finance.

    @jpgoldberg: That brings up a question that I would also like to ask @DMeans about as well [...]

    I appreciate you reaching out to knowledgeable customers to learn better ways of securing 1Password, but you have not answered my question.

    Again, could you explain why your team would say memory scrubbing is unfeasible, if your team does not appear to be well-versed in this domain? If you need to defer, I'd like to hear from the original poster brenty as well.

    4 - why claim the secret key + master key combination to be "better" than two-factor authentication? sure, this combination may help if one has compromised AgileBit's servers,

    @jpgoldberg: That is what it is designed to do. That is what it does.

    Actually, that's not necessarily true, which the term "better" falsely implies. I'd have agreed if 1Password 7's secret key + master key combination were said to provide a different protection than 2FA (or e.g. to be better in some scenarios).

    For example, a (secret key + master key) does not take advantage of time sensitivity (it's always the same codes unless the user changes them), and a compromised AgileBits could technically issue a compromised client that dupes users into entering these keys, to be sold or exploited at any time afterwards (until the user for some reason changes one of the codes). By comparison, 2FA codes are time sensitive: even if one was intercepted, the malicious user must access the account that specific 2FA code grants before it expires. This is already one scenario where 2FA may prove superior. But I digress.

    @jpgoldberg: Note that 2FA would solve neither the problem of a server breach nor the problem of malware on the machine on which you unlock 1Password. So where the Secret Key solves one of those two problems, 2FA solves neither.

    Again, not necessarily true (see below & also the example above).

    2FA, if properly implemented, would have significantly reduced the attack vector of malware causing memory dumps on the device where we run 1Password 7. However, as of right now, any black hat can siphon our master key + secret key with a very simple malware-triggered memory dump and get full access to our private vaults via a browser - rendering all of your team's local data encryption on our endpoints practically useless.

    5 - if 1Password 7 caches our master key, secret key, passwords & other stored data in such a way that can be extracted as cleartext via a simple memory dump (be it accidentally by telemetries or maliciously by malware etc) at any point the software is running, why build client-side encryption at all?

    @jpgoldberg: Because we do aim to defend against the attacker who gains read access to a user's disk. We have seen malware that scoops up 1Password data on disk [...] Encrypting data on disk addresses a clear and well defined (and likely) kind of attack.

    So, malware that specifically scoops up 1Password data on disk (a niche existence) is common enough to cause concern within your team, but general malware that causes memory dumps isn't? General software failures that trigger telemetry logs via memory dumps on a computer aren't?

    Even if I were only a layman, that'd hardly count as a logical explanation...

    6 - why didn't 1Password communicate to us in its marketing that the lock out function is, it seems, mostly just a UI / visual obfuscation mechanism right now.

    @jpgoldberg: Thank you. You have raised a good and fair point. I do not have a good answer for this.

    I appreciate that. However, if you & your team already knew this, why would 1Password continue to advertise its vault locking mechanism as follows? "Among mechanisms that are cryptographically enforced are: • Unlocking a vault"

    Another example of false advertising?

    7 - why leave information in public view that's no longer accurate, stating that 1Password does not keep our master password in memory,

    @jpgoldberg: Seriously? You want us to comb through the forum to fix or remove anything we've ever said that is no longer true? And you know as well as I do that if we removed such content, you or someone like you would scream at us for trying to re-write history.

    I don't appreciate the final sentence, and that was very unprofessional of you. We are your customers, and are on the same team as you. However, changing 1Password from never storing our master password in memory to yes we do store it in memory is a major change (not to mention major vulnerability).

    So - Did 1Password (or your team) make reasonable effort to communicate this design change to us customers in a timely manner? If not, is it our fault to rely on your employees' communication in your official company forums?

    8 - why isn't 1Password committing to a critical security fix right now?

    @jpgoldberg: Because the issue is no more of a threat to 1Password users today than it was a week ago.

    How is "we've suffered from this vulnerability for a long time" any sort of logical defense? Also, your company website still currently markets 1Password as such:

    "Someone who has access to your devices or backups won’t be able to unlock 1Password without your Master Password, which only you know."
    "Encrypted copies of your Secret Key are stored in your device backups and keychains to provide data loss protection."

    Both of these we now know are false, since the 3rd party security research paper found 1Password to store our keys in plaintext. Is this another case of false advertising that we need to be worried about?

  • @gazu What is vastly different about what Keepass is doing (which is described in detail in your screenshot) is that they're keeping all sensitive data encrypted in memory. They do have some data unencrypted in memory, but it's only passwords the users choose to reveal to work with, which is similar to how 1Password 4 used to work. Instead now 1Password 7 keeps the entire database and master password unencrypted in RAM.

    If 1Password 7 were doing what is described by that screenshot, I wouldn't have a problem with any of this. I didn't even realize it was possible to encrypt data in memory.

    Agilebits - please investigate this approach.

  • @tesmi: You're not wrong. But neither we nor Apple recommend or support that, for that very reason. The same happens with people who install Wireshark on their machines and break their own network security. We'd occasionally hear from someone that they were able to eavesdrop on local communications with the browser, but that only works because they'd explicitly allowed that to happen. We can't stop you from doing those things of course, but I think we should only be held responsible for our decisions, not yours.

    This is not something that is out of your scope to fix. Wireshark is different - it absolutely does need privileged access to be able to listen on network interfaces one way or another. And while the way they make it happen is suboptimal, they have difficult choices to deal with.

    Your app, however, does not ever need to run outside the Hardened Runtime as described by Apple. You can either visually show that the Hardened Runtime is not active (by turning the UI red and putting up big letters saying the memory is not secure, or something), or you can flat out refuse to start if the Hardened Runtime is not working or SIP is turned off. Or something along those lines.

    Currently a mistakenly installed version of 1Password gives no indication whatsoever that its security is pretty close to that of a plain-text file on your desktop if anyone has physical access to the computer. This is something that the app itself can easily detect and warn the user about.

  • Let's take a step back and look at the real exposure here.

    There are basically two exploit paths for this vulnerability: an active user (hands on keyboard) or malware.

    Active User

    If an attacker has access to an unlocked, running machine, and 1P is unlocked with the passphrase in memory, and the attacker has sufficient time before detection plus elevated privileges, they can dump memory and access the passphrase. Is that a risk? Yep, but in that case it's game over anyway - they can get to, and do, anything on the machine, including installing a keylogger. That means that even if 1P was locked, they would eventually get the master passphrase anyway.

    Now if we take out the elevated-rights requirement because, for example, someone bypassed the built-in OS protections to do a non-standard, unsupported install via homebrew, that user broke the security model anyway. I have a constant refrain with our internal users: OS X is not UNIX. Leaving the sandbox, well, leaves the sandbox, and you have to accept the risks associated with it. That said, again, you have to have someone with physical access to an unlocked, running machine. The evil maid always wins.

    Malware

    Now this is different. An actor could develop targeted malware designed to capture and extract the master passphrase from memory. That requires other exploits to achieve (in order to get the appropriate rights), and is exactly how most banking malware works. But what's the incremental risk here?

    In order to use 1Password, at some point it has to be unlocked. Once it is, the malware strikes and captures the credentials. From this standpoint it doesn't matter whether 1P clears memory when you hit the lock button, because once it's been unlocked at all, again, it's game over. The malware will get the credentials during that window of vulnerability.

    Now, in both cases there are techniques that can make that more difficult. Apple uses a number of them to protect the FileVault passphrase. But if you give either malware or an actor access to a running machine, with the right privileges, your credentials are at risk.

    Browsers

    But now let's look at a third scenario - browser or session compromise (think man in the middle attack between the browser and 1P.com). This happens all the time on corporate networks, using SSL decryption solutions to monitor egress traffic. Some antivirus/antimalware software does the same thing in order to monitor your traffic. Adblockers have some level of access too (hence Google's recent struggle to balance security/privacy and adblocking in chrome). And so on. That's why when I created my account, I used a known-clean install of the OS with a basic browser (no add-ons) from a known clean network connection.

    I suggested a while back that Agile not only provide a full-capability client, but also give us the ability to disable the web interface on our accounts completely, because the browser is the single most compromised piece of software.

    I'll repeat that moving to use only the browser is exactly the wrong solution.

    MFA

    Now, as far as MFA goes, using it to unlock the vault doesn't really protect against any of the scenarios above. It does protect against brute-force attacks on the master passphrase, but it increases the risk of catastrophic vault loss if the hardware key is lost or damaged. Hardware keys work great for corporate credentials, where there's a trusted central authority to revalidate and reissue if that happens - a corporate VPN, for example. They're lousy for general user populations without that backstop: the system either has to fall back to a less secure mechanism (which is what the attacker will force it to do), like SMS, or it's an SOL situation like the recent cryptocurrency one (assuming that wasn't fraud, but you get my meaning). This is why banks still use SMS rather than app-based tokens (Google Authenticator, Authy, etc.): they know that customers will lose their devices. And recovering from that situation is costly and inconvenient - especially if there's no local physical location available - as well as being at risk for social engineering. SIM hijacking ironically reflects both sides of that equation.

    LML

    As @jpgoldberg agreed in his response to my last post, the larger issue here is one of perception, expectations, and trust, not security. The UI gives the impression that the vault is locked and out of memory when it isn't. People feel that trust has been broken, so regardless of any actual security impact, this situation is a business impact. I suspect that the internal LML thread is rather active, and we'll see movement in that direction. That's a good thing. It'll restore trust and, yes, provide incremental security improvements - but only if done deliberately, to avoid introducing new issues. Ironically, from a business standpoint, it may be acceptable to mitigate the business risk with a quick fix even if it reduces security. The net risk to Agilebits would decline, but the risk to users would increase! I don't think they'll do that; it's just an illustration of how overall risk is managed.

    So as users how do we respond to this? Making sure you lock the machine before walking away is a mitigation. Staying off the seedy side of the internet is another. Installing 1P only via intended mechanisms, running anti-malware, making sure your system is patched and current, leaving OS security active, only obtaining software from trusted sources, and all the other basic blocking and tackling are still more.

  • I'm worried about this too...

    Is it not possible to at least clean up memory in the locked state? I understood that locked or not locked does not make a difference if you have access to the memory (which is not so difficult when you have your hands on the machine). Or am I missing something?

  • @kdhooghe Cleaning memory is possible, KeePass does this. Even 1Password 4 did this. But not version 7. Just having a Windows kernel dump on your file system seems enough to expose all passwords in cleartext.

    Thanks, that's what I understood. So why are we no longer doing it in v7? This is really scary... Out of this article, v7 comes out as the worst, as it stores ALL passwords in memory together with the master password? This must be a joke...

  • @kdhooghe The joke is unfortunately on us.

  • derek328
    edited February 2019

    At the end of the day, as some of us have started noticing, 1Password's website and security paper now all appear to contain some degree of misleading / incorrect information that seems like false advertising.

    Just look here, here, here, and here. Very worrying indeed.

  • @DeDefiance:

    May I inquire as to why passwords can't be encrypted on a per password basis rather than just the whole vault itself?
    For example; Say you open 1Password, the passwords are all encrypted in memory, once a user clicks "Copy", it decrypts only that 1 password.

    @brenty:

    That would be better, and it's something we'd like to do, along with getting more control over when memory is cleared. We just don't want to do that at the cost of the security benefits we've gained from not managing memory manually, as it's easy to make mistakes with that with complex software.

    How could this work? When I read DeDefiance's suggestion, I thought, "Well, that's not realistic, because where does the decryption key come from? Either it is in memory (providing no security), or it isn't in memory (requiring the user to supply it)." But Brent says it would be a better approach. I'm probably missing something simple here. Would someone very briefly explain how this could work?

  • MikeT (Agile Samurai)

    Team Member
    edited February 2019

    Hi guys,

    @derek328,

    why would you still then commit the entire 1password db into cache memory, especially since your team already knows 1Password 7 cannot provide secure memory management?

    We don't decrypt the entire database and put it in memory when you unlock; we decrypt the content that we need to perform our operations, and there are common situations where the entire list of items has to be scanned.

    For Watchtower to work effectively, it has to search through the entire database; it can't run on top of an encrypted database, and the same goes for searching. You can't search text without first decrypting it.

    Reused passwords, as we all know, are a very big problem: we have to read all items to find the reused ones. Then there are other Watchtower features, such as the Have I Been Pwned integration, which compares your passwords locally against the downloaded database of breached passwords. There's also accessibility support, along with the WPF-powered UI (Windows Presentation Foundation, part of the Windows .NET SDK for building interfaces): every string it reads is in memory. When we tried to prevent concealed fields from being read, it didn't stop the string from being read by screen readers, which also means these strings end up in their process memory as well.

    These are some of the reasons that 1Password 4 "scored better" than 1Password 7: it had almost none of these features, and it was well known to be slow when searching through the OPVault database or trying to fill/save after unlocking. The more integrated the app is with your data, the harder it is to decouple them; 1Password is getting more involved in protecting your data, but to do that, it has to know your data.

    The biggest blocker to a true locked model is the inability to reset the strings that we put in memory when we want, with 100% confidence; we have not found a way to do that.

    This is why we are working toward using Rust, which we believe can give us that ability, though there may be some unexpected things we find that won't give us the same 100% confidence.

    Right now, it is up to the C#/.NET system to run the garbage collector when we want memory reset. Even if we read one item at a time, try to reset memory, and then move on, we have no way of ensuring that the reset occurs at all, so we could have gone through the entire database and it would stay in memory as long as we keep running. We have no way of forcing the garbage collector to clean up when we want it to. We tried various methods and none worked. The more memory your system has, the higher the chance it will never collect. There are also other memory technologies in place that may prevent a reset: memory compression, SuperFetch, etc.

    That's the limitation of our current locked state. We are working on this, as Goldberg has mentioned, but it is not a quick process, especially when we are trying to enforce that locked means locked. We are also considering the possibility of terminating the process at lock (but then how do you protect the secrets as they pass from one process to another?); this has unexpected side effects. It would help with the narrowest attack points, the memory dumps and/or DMA attacks that Goldberg mentions, but as long as someone can read your memory, they can just wait until you unlock 1Password.

    For some operations, KeePass must make sensitive data available unencryptedly in the process memory. For example, in order to show a password in the standard list view control provided by Windows, KeePass must supply the cell content (the password) as unencrypted string (unless hiding using asterisks is enabled).

    We use the ProtectedMemory APIs that KeePass also uses, but only for certain things, because we found that the API encrypts with your Windows account keys, which means any app running in your account can just decrypt the data with the same Windows key. That doesn't protect you from the same issues we're facing here.

    But as you may point out, that wouldn't matter in a locked state, and you would be correct, except that we have no accurate and secure method to reset these strings in memory once they've been read.

    @kdhooghe,

    Is it not possible to at least clean up memory when in locked state? I understood that locked or not locked does not make a difference if you have access to the memory (which is not so difficult when you have your hands on the machine). Or am I missing something ?

    You're not missing anything, except that it is not that easy to clean up memory. And even if we could clean up properly while locked, in a compromised situation the malware or criminal is not just going to "give up" and leave; they'll simply wait for you to unlock and then read the data.

    1Password is not a security tool for your computer as a whole; that's the job of your OS and anti-malware tools. We work within that environment to provide you with a tool to manage your passwords.

    For example, in the event someone steals your laptop (knock on wood), you still have to make sure you lock down the system whenever you leave it, even for a bathroom break. We can't protect you in these narrow situations. What we can do is give you a tool that protects you as you browse the web, letting you use randomized passwords across every site while only having to remember one password instead of everything.

  • @MikeT

    We use the ProtectedMemory APIs that KeePass also uses, but only for certain things, because we found that the API encrypts with your Windows account keys. That means any app running in your account can decrypt the data with the same key, which doesn't protect you from the issues we're facing here.

    Correct me if I'm wrong, but despite this weakness, this would prevent the scenario that many have brought up where there is a BSOD and the entire contents of 1Password's RAM get saved in plaintext in a crash dump - which could then be sent to a third party as part of a telemetry package.

    We don't decrypt the entire database and put it in memory when you unlock; we decrypt the content we need to perform our operations. However, there are common situations where the entire list of items has to be scanned.

    For Watchtower to work effectively, it has to search through the entire database; it can't run on top of an encrypted database, and the same goes for search. You can't search encrypted text without first decrypting it.
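    The constraint can be sketched in a few lines. In this hypothetical Rust example (a toy XOR "cipher", not 1Password's design), each item must be decrypted before it can be matched against the search term, but the plaintext can be wiped as soon as the match is done, so at most one decrypted item is live at a time:

    ```rust
    // Toy stand-in for a real authenticated cipher.
    fn decrypt(ciphertext: &[u8], key: u8) -> Vec<u8> {
        ciphertext.iter().map(|&b| b ^ key).collect()
    }

    /// Return indices of items whose decrypted text contains `needle`.
    fn search(items: &[Vec<u8>], key: u8, needle: &str) -> Vec<usize> {
        let mut hits = Vec::new();
        for (i, item) in items.iter().enumerate() {
            // Searching requires plaintext: decrypt the item first...
            let mut plain = decrypt(item, key);
            if String::from_utf8_lossy(&plain).contains(needle) {
                hits.push(i);
            }
            // ...then wipe it immediately, so only one plaintext
            // item exists at any moment during the scan.
            for b in plain.iter_mut() {
                unsafe { std::ptr::write_volatile(b, 0) };
            }
        }
        hits
    }

    fn main() {
        let key = 0x2a;
        let items: Vec<Vec<u8>> = ["alpha", "beta", "betamax"]
            .iter()
            .map(|s| s.bytes().map(|b| b ^ key).collect())
            .collect();
        assert_eq!(search(&items, key, "beta"), vec![1, 2]);
        println!("ok");
    }
    ```

    Even with immediate wiping, the plaintext of every item still transits RAM during the scan, which is exactly the window a memory-reading attacker exploits.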

    I'm not advocating this as a solution, but theoretically if I were to disable Watchtower within the 1Password client, would that limit the amount of data that gets loaded into RAM to those passwords which I access directly?

  • brenty

    Team Member

    How could this work? When I read DeDefiance's suggestion, I thought, "Well, that's not realistic, because where does the decryption key come from? Either it is in memory (providing no security), or it isn't in memory (requiring the user to supply it)." But Brent says it would be a better approach. I'm probably missing something simple here. Would someone very briefly explain how this could work?

    @nils_enevoldsen: We don't know yet. If we did, this would be a solved problem, and the whole industry would be taking advantage of the solution. But the idea is that using different tools for different things may afford us the ability to manage memory to some extent when it really counts, and let the OS do it elsewhere. But, as illustrated by Mike's example above, great ideas don't always pan out:

    We use the ProtectedMemory APIs that KeePass also uses, but only for certain things, because we found that the API encrypts with your Windows account keys. That means any app running in your account can decrypt the data with the same key, which doesn't protect you from the issues we're facing here.

    So we need to do the work and test the efficacy of any proposed solutions.

  • brenty

    Team Member

    @derek328: If there is an error we need to correct, please let us know the specifics. Otherwise it just seems like you're trolling, not having read the answers to your questions.

  • brenty

    Team Member

    @tesmi: It's certainly worth considering, but it's wholly unreasonable to complain that your security is broken when you're willfully breaking it yourself.

  • https://docs.microsoft.com/en-us/dotnet/api/system.gc.collect?view=netframework-4.7.2

    Wouldn't this allow you to force Garbage Collection so that unneeded data was cleared out of RAM?

  • @tesmi: It's certainly worth considering, but it's wholly unreasonable to complain that your security is broken when you're willfully breaking it yourself.

    Kind of. But how would a regular user know? It's in fact quite hard to figure out whether a process is running under the Hardened Runtime or not. This is a common installation method even if it's not officially supported, and even the people working on the Homebrew project certainly did not mean for this to happen!

    And from another angle - what if a malicious actor were to disable the Hardened Runtime on purpose. This, I'm pretty sure, can be done without elevated privileges. Then the user could happily keep using 1Password with no indication that its memory has been made user readable.

  • brenty

    Team Member

    Kind of. But how would a regular user know?

    @tesmi: I really don't think I can agree with you on this count. A "regular user" doesn't depend on Homebrew to use 1Password (or even know what it is). They download 1Password from the 1Password website.

    It's in fact quite hard to figure out if a process is running within the Hardened Runtime or not.

    That's why Apple goes to so much trouble to set things up so users don't have to worry about it.

    This is a common installation method even if it's not officially supported.

    We interact with thousands of customers per day. It isn't the case that this is common in the 1Password userbase, even if it may be within your particular circle.

    Even the people working on the homebrew project certainly did not mean for this to happen!

    Maybe not, but I'm not sure either of us are in a position to determine that.

    And from another angle - what if a malicious actor were to disable the Hardened Runtime on purpose. This, I'm pretty sure, can be done without elevated privileges. Then the user could happily keep using 1Password with no indication that its memory has been made user readable.

    They still need to have unrestricted access to the machine, either locally or through a remote connection. Especially on macOS, this doesn't happen by accident.

  • Maybe not, but I'm not sure either of us are in a position to determine that.

    They still need to have unrestricted access to the machine, either locally or through a remote connection. Especially on macOS, this doesn't happen by accident.

    Yes, they would have to do this. Or, as you implied above, they could create a malicious installation channel that installs your trusted, signed binary in a way that does not work with the Hardened Runtime. I can tell you that we have been using this installation method in parts of our organisation due to ease of automation. We are a paying customer of 1Password with several hundred users. Your attitude towards not wanting to fix real-world issues is getting on my nerves. This is something that is exploitable right now. Perhaps not for all of the thousands of customers you interact with every day, but at the very minimum for hundreds of customers. And you seem not to care at all?

  • MrC Community Moderator

    @tesmi

    I can tell you that we have been using this installation method in parts of our organisation due to ease of automation.

    You've opted for convenience over security. We all do at times. That's an acceptable choice, but it doesn't eliminate responsibility for the consequences of that choice.

  • brenty

    Team Member

    Yes, this they would do. Or as you implied above, they could create a malicious installation channel installing your trusted, signed binary in a way that does not work with the Hardened Runtime.

    @tesmi: I did not imply that. Modifying the binary would invalidate the signature. I guess I just don't understand why you're seemingly going out of your way to circumvent security.

    I can tell you that we have been using this installation method in parts of our organisation due to ease of automation. We are a paying customer of 1Password with several hundred users. Your attitude towards not wanting to fix real world issues is getting on my nerves. This is something that is exploitable right now. Perhaps not for all of the thousands of customers you interact every day, but in the very minimum hundreds of customers. And you seem not to care at all?

    Please clarify the specific issue you're facing, and/or the specific threat that you're concerned about. From your earlier comments, it sounded like a thought experiment. But if you have steps to reproduce a bug or security issue that needs to be addressed, I'll be happy to look into it for you.

This discussion has been closed.