Reducing attack surface?

wkleem
Community Member
edited April 2017 in Lounge

You may or may not have heard that Microsoft Office Word documents can be used as a malware booby trap.

A security researcher discovered this and alerted Microsoft, which kept quiet for six months, from October 2016 until February/March 2017, while working to resolve it. Once McAfee got hold of the information and published the details, the attackers knew about it too; they apparently did not know about it when it was first reported six months earlier, in October 2016.

This flaw is in every Microsoft Word version that accepts macros.

https://wired.com/2017/04/security-news-week-microsoft-word-zero-day-left-folks-scrambling/

https://thenextweb.com/security/2017/04/10/microsoft-office-word-malware/

"The vulnerability was first discovered by researchers at McAfee, who detailed the bug in more detail last Friday. Since then, fellow cybersecurity firm FireEye published another blog about the same vulnerability, informing it had been withholding disclosure until Microsoft has had a chance to fix the glitch."


1Password Version: Not Provided
Extension Version: Not Provided
OS Version: Not Provided
Sync Type: Not Provided

Comments

  • jpgoldberg
    1Password Alumni

    What's old is new again

    Word file macros used to be an extremely common attack vector. This was in the bad old days before Microsoft started taking more responsibility for security (and it has been doing an excellent job since that shift). I have long ranted that word processor files should only be sent among co-authors. They are not suitable for document exchange with people who are not co-authors of the document. Otherwise, send something like a PDF.

    I had added to my rant that PDFs were much, much safer than Word files. But more recently, bugs in PDF readers (particularly Adobe's) switched that around. A few years ago it felt like every week brought another security bug in Acrobat. Fortunately, that has changed too.

    Anyway, don't allow macros in Office files unless you have a very good reason to. I think that advice has stood the test of time even if so much of my other advice hasn't.

    Fixing bugs can take time

    Some bugs (security or otherwise) can be fixed relatively quickly, but others cannot be. How easy or hard it is to fix something is often not correlated with the severity of the bug. So I am pretty sympathetic to Microsoft here. In the days before responsible disclosure became a norm, vendors would often not fix security vulnerabilities at all. They would rely on the people reporting the bugs to keep the existence of those bugs secret. And people would keep them secret. So some bugs existed for many, many years, during which time they may well have been discovered by bad guys as well.

    Deadlines with extensions

    So a new practice was developed. Those who reported bugs would give the vendors a deadline: we will go public with the bug after 90 days whether you have fixed it or not. This has been the practice, more or less, since the 1980s, and it works reasonably well. But 90 days, or even six months, often isn't enough, as some bugs are harder to fix. The purpose of giving the deadline was to prevent the vendor from simply ignoring the bug, so in many cases, if the people reporting a bug believed that the vendor was working in good faith to fix it, they might extend the deadline. Not everyone does this, but it is common enough.

    Some fixes break things

    Some fixes break other things. For example, until about a year ago, we weren't parsing web addresses as safely as we should have been. Although it wasn't entirely clear how this could be exploited, it is the kind of thing that sooner or later someone figures out how to exploit. So moving to stricter parsing had been on our security to-do list. Anyway, we introduced stricter parsing into a beta of 1Password for Mac, and it led to a disaster; the fix had to be withdrawn immediately.

    It turns out that something like www.nytimes.com is a syntactically valid URL, just not in the way you might think. When you parse it with a parser built from the formal specification of URLs, it treats www.nytimes.com as the path part of the URL, not the host part. You really need the https:// portion for it to be parsed as a host. Our sloppy parser did treat strings like that as the host portion of a URL, so over the years many people had accumulated Logins in their 1Password data that listed websites in that form. Once we moved to proper parsing, all of those failed to fill.
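
    To make that concrete, here is a rough sketch of the parsing difference using Foundation's URLComponents. This is just an illustration of the behaviour I'm describing, not our actual code, and the example strings are my own:

        import Foundation

        // Without a scheme, a spec-driven parser treats the whole string as a path.
        let bare = URLComponents(string: "www.nytimes.com")
        print(bare?.host ?? "nil")   // nil
        print(bare?.path ?? "")      // "www.nytimes.com"

        // With the scheme present, the host parses the way people expect.
        let full = URLComponents(string: "https://www.nytimes.com")
        print(full?.host ?? "nil")   // "www.nytimes.com"
        print(full?.path ?? "")      // "" (empty path)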

    So a simple shift from sloppy parsing to strict (and safe) parsing introduced lots of unacceptable errors. What had seemed like a simple "Hey, we should be using NSURL instead of our regular expression" turned into something we had to withdraw immediately and then work out a whole algorithm and body of test data for, to make sure that we consistently (and safely) got the results we wanted. So it took a few more months before this supposedly simple security improvement could actually be deployed.
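
    If you're curious what "a whole algorithm" might look like in miniature, here is a deliberately simplified sketch of one common approach: parse strictly, and fall back to prepending a scheme only when the strict parse yields no host. The function name is made up for illustration, and this is not the code we actually shipped:

        import Foundation

        // Simplified fallback: strict parsing first, lenient retry only for
        // scheme-less entries such as "www.nytimes.com".
        func hostForFilling(from stored: String) -> String? {
            if let strict = URLComponents(string: stored), let host = strict.host {
                return host
            }
            // No host found; assume the user omitted the scheme and retry.
            return URLComponents(string: "https://" + stored)?.host
        }

        print(hostForFilling(from: "www.nytimes.com") ?? "nil")          // "www.nytimes.com"
        print(hostForFilling(from: "https://www.nytimes.com") ?? "nil")  // "www.nytimes.com"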

    Now, I don't know the details of the case you report, but I can imagine that the simple fix for the Microsoft macros might mean that tens or hundreds of thousands of documents that behaved correctly yesterday would break tomorrow. So fixing things may be a very complicated task that needs more than six months.

This discussion has been closed.