If you want to shut out the overwhelming majority of vulnerabilities in Microsoft products, turn off admin rights on the PC. That’s the conclusion from global endpoint security firm Avecto, which has issued its annual Microsoft Vulnerabilities report. It found 530 Microsoft vulnerabilities reported in 2016, and of the critical vulnerabilities among them, 94% could be mitigated by removing admin rights, up from 85% the year before.

This is especially true of Microsoft’s browsers, for those who still use them: Avecto reported that 100% of vulnerabilities affecting both Internet Explorer and Edge could be mitigated by removing admin rights. One bit of progress is that 109 vulnerabilities affecting IE 6 through 11 were reported in 2016, way down from 238 the previous year.

“Privilege management and application control should be the cornerstone of your endpoint security strategy, building up from there to create ever stronger, multiple layers of defense. These measures can have a dramatic impact on your ability to mitigate today’s attacks. Times have changed; removing admin rights and controlling applications is no longer difficult to achieve,” said Mark Austin, co-founder and CEO of Avecto, in a statement.

Windows 10 was found to have the highest number of vulnerabilities of any OS (395), 46% more than Windows 8 and Windows 8.1 (265 each). Avecto found that 93% of Windows 10 vulnerabilities could be mitigated by removing admin rights.

Microsoft Office was hit with 79 vulnerabilities in 2016, up from 62 in 2015 and just 20 in 2014. This data covers Office 2010, Office 2013, Office 2016 and their various applications; Office 365 was not included in the results. Removing admin rights would mitigate 99% of the vulnerabilities in the older versions and 100% of those in Office 2016, the latest version of Microsoft’s suite.

The admin rule also applies to Windows Server, where admin privileges would seem more necessary and justifiable. Overall, 319 vulnerabilities affecting Server 2008, 2012 and 2016 were reported in Microsoft Security Bulletins, and 90% could have been mitigated by removing admin rights.

Avecto said that turning off admin privileges works alongside tools such as antivirus to proactively prevent malware from executing in the first place, rather than relying on detection and response after the event.

It’s a shame this message is being missed. Avecto has been issuing this warning for years, and the number of infections and breaches suggests few are listening, even as the share of vulnerabilities this one measure would mitigate keeps climbing: just three years ago the figure was 92%.

This should be a no-brainer for most firms. I can understand the hesitation: taking admin rights away from workers will undoubtedly lead to more screaming from those who find themselves restricted from some functions, such as installing software, and no one wants to increase calls to the help desk. But the guidance stands, and the data backs it up.
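If you want to gauge your own exposure before rolling out privilege-management tooling, the first step is simply knowing which sessions and accounts hold admin rights. Below is a minimal Python sketch along those lines, assuming a Windows machine; it illustrates the audit step only and is not Avecto’s product or methodology. `IsUserAnAdmin` and `net localgroup` are standard Windows facilities.

```python
import ctypes
import subprocess

def is_admin() -> bool:
    """Return True if the current process has administrator rights (Windows only)."""
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        # ctypes.windll does not exist on non-Windows platforms.
        return False

def list_local_admins() -> str:
    """List members of the local Administrators group via the built-in `net` tool."""
    try:
        result = subprocess.run(
            ["net", "localgroup", "Administrators"],
            capture_output=True, text=True, check=False,
        )
        return result.stdout
    except FileNotFoundError:
        return "(`net` not available; not a Windows host?)"

if __name__ == "__main__":
    if is_admin():
        print("WARNING: this account is running with admin rights.")
    else:
        print("OK: this account does not have admin rights.")
    print(list_local_admins())
```

Run from an ordinary user session it should print the “OK” line; run from an elevated prompt it prints the warning, which is exactly the day-to-day configuration Avecto’s numbers argue against.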
US Cyber Command boss lays out plans for next decade

NSA and US Cyber Command boss Mike Rogers has revealed the future direction of his two agencies, and for the private sector this masterplan can be summarized in one word. Kerching!

Speaking at the West 2017 Navy conference on Friday, Rogers said he is mulling buying up more infosec tools from corporations to attack and infiltrate computer networks. At the moment the online offensive wing of the US military develops most of its own cyber-weaponry, he claimed, and he figures the private sector has plenty to offer.

“In the application of kinetic functionality – weapons – we go to the private sector and say, ‘Build this thing we call a [Joint Direct Attack Munition], a [Tomahawk Land Attack Missile].’ Fill in the blank,” he said. “On the offensive side, to date, we have done almost all of our weapons development internally. And part of me goes – five to ten years from now is that a long-term sustainable model? Does that enable you to access fully the capabilities resident in the private sector? I’m still trying to work my way through that, intellectually.”

Businesses already flog exploits, security vulnerability details, spyware, and similar stuff to US intelligence agencies, and Rogers is clearly considering stepping that trade up a notch. For example, in 2013 it was revealed that the NSA was buying up exploits from French company Vupen Security. Vupen has since shut down, and its founders started a US-based business called Zerodium. That outfit offers security researchers huge sums of cash for details of security bugs in products, and last year offered $1.5m for a remote iOS 10 jailbreak exploit. With bounties like that being thrown around, you can bet the biz is charging its bug-list subscribers healthy fees – and the US military, with deep pockets, will only be too happy to cough up, if it isn’t already.

“I’m sure US companies are selling weapons to Cyber Command,” computer security guru Bruce Schneier told The Register. “After all, why wouldn’t they? We contract so much stuff out to private suppliers in the US military anyway.”

In 2015, Cyber Command spent $460m on “a broad scope of services needed to support the US Cyber Command mission,” according to the US General Services Administration. The specifics of the contract weren’t released, but the winners were named as The KEYW Corporation; Vencore; Booz Allen Hamilton; Science Applications International Corporation; CACI Federal; and Secure Mission Solutions.

Public/private partnerships

Bringing the US private sector fully on board doesn’t just mean buying from it, but also working with it, Rogers explained. When it comes to critical infrastructure, he said he would like to see US Cyber Command and private IT security employees having “a level of integration where we have actual physical co-location with each other.”

“How do we take advantage of that and integrate at that level?” he said. “Because as an execution guy, my experience teaches me that you want to train, you want to exercise, you want to simulate as many conditions as you can before you actually come into contact with an opponent.”

Rogers also said he expects more help from the private sector on the defensive side of online operations. He mentioned getting help on machine learning systems, something the head of Google-parent Alphabet isn’t too keen to supply.
Strike Force Cyber

Rogers also outlined his plans to put more online attack tools in the hands of more front-line troops over the next five or ten years. “We should be integrating [cyber] into the strike group and on the amphibious expeditionary side,” he said. “We should view this as another toolkit that’s available … as a commander is coming up with a broad schema of maneuver to achieve a desired outcome or end state. That’s what I hope.”

He complained that at the moment the decision to use online weaponry is too much like the use of nuclear weapons, “controlled at the chief-executive level and is not delegated down.” That should change in the coming years, he opined, and he hoped to see such weapons used at a tactical level.

Rogers suggested that lessons should be learned from the US use of Special Forces units. These were previously carefully guarded and rarely deployed, but after the formation of the US Special Operations Command they became integrated with the regular army command structure. Rogers said he foresaw the same thing happening on the cyber front.

“I would create Cyber Command much in the image of US Special Operations Command,” he said. “Give it that broad set of responsibilities where it not only is taking forces fielded by the services and employing them; it’s articulating the requirement and the vision and you’re giving it the resources to create the capacity and then employ it.”

That might sound good, but Schneier pointed out that the US might be making a rod for its own back. After all, these are not typical weapons, and they come with their own set of problems. “These are fundamentally fragile things,” Schneier said. “If you use a cyber weapon you have a very strong chance of rendering it unusable again. Do you want to give some second lieutenant the ability to do that?”

A few good hackers

Rogers said the training and retention of human talent will be essential in the years ahead, and so far Cyber Command isn’t having too many problems getting the people it needs, thanks to the unique nature of the job.

“That’s a real selling point for us right now,” he said. “The self-image of this workforce is that they are the digital warriors of the 21st century. The way they look at themselves – we’re in the future, we’re the cutting edge, we’re doing something new, we’re blazing a path. Everybody responds well to that.”

He said he tells staff they can do things within Cyber Command that they wouldn’t be allowed to do outside the military. That said, the force is bound by the Law of Armed Conflict, which limits attack choices to purely military targets.

Cyber Command is currently staffed by about 80 per cent military and 20 per cent civilian employees, he said. By contrast, the NSA is about 60 per cent civilian and 40 per cent military. Getting civilian employees is slightly harder than getting qualified military staff, he said. Part of that is, no doubt, down to the increased levels of security vetting involved. After all, they don’t want another Snowden in the ranks.

SAN FRANCISCO – A panel of experts at RSA Conference 2017 suggested the process by which federal agencies decide whether to disclose or withhold software vulnerabilities should be codified into law.
The National Security Agency has come under fire this past year over its vague policy on whether to disclose vulnerabilities or retain them for intelligence-gathering purposes, and experts said that is because the Vulnerability Equities Process is currently voluntary, not mandatory. The Vulnerability Equities Process was designed to help government agencies decide whether a vulnerability they have obtained or discovered should be disclosed to the developer for patching, or withheld for exploitation by intelligence agencies, law enforcement, or for other purposes.

Heather West, senior policy manager and Americas principal at Mozilla, said the process has been successful. “There are very well-established norms around vulnerability disclosure and they are evolving. A lot of people are talking about them — DHS, private industry, CERT — and following those best practices really makes sense,” West told the crowd at RSAC 2017. “We don’t need to reinvent the wheel around disclosure; we just need to make sure things are getting disclosed.”

Rob Knake, senior fellow at the Council on Foreign Relations, said codifying the Vulnerability Equities Process into law wouldn’t lead to a substantial change in how it works, “but it would increase the level of trust in the process.”

“There’s a lot of doubters out there that this process is in place,” Knake said. “I think making it a law, making it a requirement, makes it a lot harder to argue that the federal government and federal employees are going to violate those laws and run those penalties. Right now, there are no penalties for an agency or for an individual who holds back that information. So, I don’t think it would have a substantial change, but it would increase the level of trust in the process.”

West said trust, congressional oversight and other “fringe benefits” would come from an official law. “Right now the process is voluntary on the part of the federal agency. Some agencies take the position that all vulnerabilities they know of ought to go through the VEP, and I applaud that. Other agencies, in particular the FBI, have been a little more reticent to put things through the VEP because they want to hold on to them,” West said. “From my perspective, the VEP process works so well because it is balancing a broad set of equities across the government — defensive, offensive — and if you’re deciding that on your own, I’m a little more concerned about it.”

Susan Hennessey, a fellow in governance studies and managing editor of Lawfare, noted that the elephant in the room is that a law would not only increase transparency and accountability, but could address the “concern about how this particular administration is going to wield the powers of the national security apparatus.”

“I don’t think that’s a controversial statement to make; there are concerns. So now, I think there is potentially an additional appetite for some of the things that are working, wanting to place that additional protection of there being an actual law that means there’s not discretion within the federal government, there’s external accountability,” Hennessey said. “And so to say that the process is working and what really matters is additional public legitimacy, going to Congress is the only way we’re going to get that for this very strange political moment we’re in.”

Vulnerability Equities Process oversight

However, the experts could not reach a consensus on where the Executive Secretariat of the process should reside in government.
The Executive Secretariat is responsible for overseeing the process, including notifying points of contact when it is determined that a vulnerability should be disclosed, and compiling year-end reports. The position currently sits in the NSA’s Information Assurance Directorate, but Hennessey and Knake thought it should be moved to the Department of Homeland Security.

“There has been a really strong reliance, particularly over the past few years, on the DHS, because the DHS has a really good relationship with the public, has a really good relationship with the public sector, has a reputation for prioritizing privacy,” Hennessey said. “NSA has… struggled a little bit on some of those, admittedly.”

Knake said, “If you’ve got DHS, which is growing capability in vulnerability research, which is oriented towards defense, if you’re saying this is severely biased towards the defense, it makes more sense to put it at DHS than NSA.”

But Neil Jenkins, director of the Enterprise Performance Management Office at the Department of Homeland Security — who declined to officially comment on codifying the Vulnerability Equities Process into law — thought oversight of the process should take a different approach. “I feel uncomfortable with DHS in that role as well. If we are going to be forward-leaning on the vulnerabilities that we release … it puts us in a bad position if we’re the Executive Secretariat over the process. We want to maintain the trust relationship we have with our partners,” Jenkins told the audience. “If we want to move away from the NSA in that position, I think we should look at a more interagency approach. But I think putting the Executive Secretariat role in any place that has a default position on this then puts them in a bad position going forward.”

In a fresh analysis of the Shamoon2 malware, researchers from Arbor Networks’ Security Engineering and Response Team (ASERT) say they have unearthed new leads on the tools and techniques used in the most recent wave of attacks.

Shamoon2 surfaced in November, approximately four years after the original Shamoon was used in attacks against Saudi Aramco, the national petroleum and natural gas company based in Saudi Arabia. Like the original Shamoon malware, the updated version destroys computer hard drives by wiping the master boot record and the data. Shamoon2 likewise goes after petrochemical targets, but has also reportedly hit the Saudi Arabian central bank system.

However, up until last week researchers were still searching for basic answers about how Shamoon2 infects its hosts and what its backend infrastructure looks like. Neal Dennis, cyber threat intelligence analyst at Arbor Networks, said that thanks to third-party research the ASERT team was able to answer new questions about Shamoon2. “It is our hope that by providing additional indicators, endpoint investigators and network defenders will be able to discover and mitigate more Shamoon2 related compromises,” Dennis wrote in a blog post explaining his research.

Last week, IBM’s X-Force reported how Shamoon2 infects hosts: document-based malicious macros are used as the means of initial infection.
Emails sent to targets include a document containing a malicious macro that, when approved to execute, enables command-and-control communications to the attacker’s server via PowerShell commands. Next, attackers use that access to deploy additional tools and reach further network resources. Attackers then download and deploy the Shamoon2 malware itself.

Using X-Force’s research as a springboard, Dennis said, ASERT was able to dig deeper and conduct a first-time analysis of the Shamoon2 backend infrastructure. By analyzing three X-Force malware samples, Dennis said he was able to trace them back to malicious domains, IP addresses, and other previously unknown Shamoon2 malware artifacts.

ASERT said its analysis of Shamoon2 shows connections with Middle Eastern state-sponsored groups such as Magic Hound and PuppyRAT. That may not be a major revelation, considering that in 2012 the original Shamoon malware was also linked to Middle Eastern state-sponsored groups. “Now we can begin to see who is behind Shamoon2 and how its backend infrastructure works,” Dennis said.

Dennis said ASERT researchers were able to piggyback on X-Force’s research and cross-reference the malicious document author name “gerry.knight” and IP addresses used by Shamoon2’s PowerShell stage against the threat actors Magic Hound and PuppyRAT. “In this case, a sample from the IBM report indicated the document author was ‘gerry.knight,’” Dennis said. That led ASERT to three additional samples of documents used to distribute malicious macros unrelated to the Shamoon2 campaigns, Dennis said; those samples matched existing documents used in Magic Hound campaigns.

An additional clue was a “sloo.exe” file dumped by Shamoon2 in a targeted PC’s Temp folder. “The file was created at C:\Documents and Settings\Admin\Local Settings\Temp\sloo.exe. In addition to this file, the sample also contacted 104.238.184[.]252 for the PowerShell executable,” Dennis wrote in a technical description of his research. He said that separate research by Palo Alto Networks attributed the “sloo.exe” file and related activity to Magic Hound.

Further analysis of IPs used by Shamoon2’s PowerShell stage also turned up existing credential-harvesting campaigns that once used the domain go-microstf[.]com, originally set up to spoof a Google Analytics login page. This spoofing campaign, Dennis said, was active as recently as January, the timeframe of the last Shamoon2 attacks. “We have pulled a lot of related research together here and connected a lot of dots for the first time,” Dennis said. “This additional research will hopefully provide more context into the ongoing Shamoon2 threat.”
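For defenders who want to act on the indicators above, a quick triage sweep is easy to script. The Python sketch below is purely illustrative and is not ASERT’s tooling: it re-arms the defanged indicators quoted in the research (the sloo.exe dropper name, the 104.238.184[.]252 address and the go-microstf[.]com domain) and looks for them in the local Temp directory and in an exported proxy log whose filename is invented here.

```python
import tempfile
from pathlib import Path

# Indicators taken from the reporting above (defanged forms re-armed here).
SUSPECT_FILENAME = "sloo.exe"
SUSPECT_IP = "104.238.184.252"
SUSPECT_DOMAIN = "go-microstf.com"

def check_temp_for_dropper() -> list[Path]:
    """Look for the dropped executable anywhere under the user's Temp directory."""
    temp_dir = Path(tempfile.gettempdir())
    return list(temp_dir.rglob(SUSPECT_FILENAME))

def grep_log_for_indicators(log_path: Path) -> list[str]:
    """Return log lines that mention the known C2 IP or spoofed domain."""
    hits = []
    with log_path.open(errors="ignore") as fh:
        for line in fh:
            if SUSPECT_IP in line or SUSPECT_DOMAIN in line:
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    for path in check_temp_for_dropper():
        print(f"possible Shamoon2 dropper: {path}")
    log = Path("proxy.log")  # hypothetical exported proxy/firewall log
    if log.exists():
        for hit in grep_log_for_indicators(log):
            print(f"indicator hit: {hit}")
```

A hit on any of these is only a lead, not a verdict; as the researchers note, some of this infrastructure overlaps with other Magic Hound activity.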
In another strange tale from the kinetic-attack-meets-cyberattack department, earlier this week I heard from a loyal reader in Brazil whose wife was recently mugged by three robbers who nabbed her iPhone. Not long after the husband texted the stolen phone — offering to buy back the locked device — he began receiving text messages stating the phone had been found. All he had to do to begin retrieving the device was click the texted link and log in to the phishing page mimicking Apple’s site.

Edu Rabin is a resident of Porto Alegre, the capital and largest city of the Brazilian state of Rio Grande do Sul in southern Brazil. Rabin said three thugs robbed his wife last Saturday in broad daylight. Thankfully, she was unharmed, and all they wanted was her iPhone 5s.

Rabin said he then tried to locate the device using the “Find My iPhone” app. “It was already in a nearby city, where the crime rates are even higher than mine,” Rabin said. He said he then used his phone to send the robbers a message offering to buy back his wife’s phone. “I’d sent a message with my phone number saying, ‘Dear mister robber, since you can’t really use the phone, I’m preparing to rebuy it from you. All my best!’ This happened on Saturday. On Sunday, I’d checked again the search app and the phone was still offline and at same place.”

But the following day he began receiving text messages stating that his phone had been recovered. “On Monday, I’d started to receive SMS messages saying that my iphone had been found and a URL to reach it,” Rabin said.

[Image: one of the phishing texts sent to Rabin.]

The link led to a page that looks exactly like the Brazilian version of Apple’s sign-in page, but which is hosted on a site that allows free Web hosting. Rabin said he didn’t fall for the ruse, but he imagines the scam would trick quite a few people who have lost their iPhone and are anxious to get it back. Leave the “icloud” off the end of that texted URL and we can see a phony copy of Apple’s “Find My iPhone” login page that is still live (the hosting provider has been notified).

[Image: a “Find my iPhone” phishing page used by the robbers.]

But the scammers didn’t stop there in trying to phish the Apple ID and password for his iPhone account. Rabin said that just two days later he received an odd, automated call on his mobile. “It came from a strange number and a voice sounding like Siri or the [Google] Waze voice, informing me that my iPhone had been found and to look for my SMS for more info,” Rabin said. “That’s when I thought I had to tell this story to someone. To me, it really got to another level, connecting the lowest kind of criminals to a high-profile one (probably went to school and college) that can buy (or even create) this kind of scam.”

The high cost of smartphones makes mobile device theft a serious problem everywhere in the world, not just Brazil. If you use an Apple device, it’s a good idea to turn on the “Find My iPhone” feature using the Find My iPhone app, so that when or if the device gets lost you can locate it by signing into icloud.com/find. If your Apple device is lost or stolen, check out Apple’s advice on how to manage the loss, depending on the severity of the situation. In Rabin’s case, even though the phone is currently turned off, he has the option to put it in “Lost Mode,” lock it, or remotely erase it; the next time the device comes online, those actions will take effect.

Also, try to make a habit of regularly syncing your device to your computer, so that in the event your phone is lost or stolen your data is backed up and you don’t have to worry about remotely wiping important data that may not already be saved locally.
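The broader lesson generalizes beyond this one scam: before typing an Apple ID into any “your phone has been found” link, confirm the link actually points at an Apple-controlled domain over HTTPS. Here is a minimal sketch of that check in Python; the list of legitimate suffixes is an assumption made for illustration, and the example URLs are invented.

```python
from urllib.parse import urlparse

# Domains assumed legitimate for Find My iPhone sign-in (for this sketch).
LEGIT_SUFFIXES = ("apple.com", "icloud.com")

def looks_legitimate(url: str) -> bool:
    """Crude check: is the link's host really an Apple domain served over HTTPS?"""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = (parsed.hostname or "").lower()
    # Accept apple.com/icloud.com and their subdomains, but nothing that merely
    # *contains* the string: icloud.com.evil.example must fail this test.
    return any(host == d or host.endswith("." + d) for d in LEGIT_SUFFIXES)

# Hypothetical examples, like the texts Rabin received:
print(looks_legitimate("http://freehost.example/icloud"))        # False
print(looks_legitimate("https://www.icloud.com/find"))           # True
print(looks_legitimate("https://icloud.com.evil.example/find"))  # False
```

Note the suffix test: a naive substring match would wave through hosts like icloud.com.evil.example, which is precisely the trick free-hosting phishing pages rely on.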
Google’s Project Zero team has disclosed a potential arbitrary code execution vulnerability in Internet Explorer because Microsoft did not act within Google’s 90-day disclosure deadline. This is the second flaw in Microsoft products made public by Google Project Zero since the Redmond giant decided to skip this month’s Patch Tuesday and postpone its previously planned security fixes until March.

Microsoft blamed the unprecedented decision to push back scheduled security updates by a month on a “last minute issue” that could have had an impact on customers, but the company hasn’t clarified the nature of the problem. Some people have speculated that the problem might be related to the Windows Update infrastructure rather than a particular fix, but the company pushed out a Flash Player security update on Tuesday, which suggests that if there was an infrastructure problem, it is now resolved.

The newly disclosed vulnerability is a so-called type confusion flaw that affects Microsoft Edge and Internet Explorer and can potentially allow remote attackers to execute arbitrary code on the underlying system. “No exploit is available, but a PoC [proof-of-concept] demonstrating a crash is,” Carsten Eiram, chief research officer at vulnerability intelligence firm Risk Based Security, said via email. “This PoC may provide a good starting point for anyone who wants to develop a working exploit. Google [Project Zero] even includes some comments on how to possibly achieve code execution.” The Risk Based Security researchers have confirmed the potentially exploitable crash for IE11 on a fully patched Windows 10 system and have assigned it a CVSS severity score of 6.8, treating its impact as potential code execution.

On Feb. 14, after Microsoft announced its decision to postpone the February patches, Google Project Zero disclosed a memory disclosure vulnerability in Windows’ GDI library. Another vulnerability that has yet to be patched was publicly disclosed three weeks ago by an independent researcher. That flaw is located in Microsoft’s implementation of the SMB network file-sharing protocol and can be exploited to crash Windows computers if attackers trick them into connecting to rogue SMB servers. The researcher who disclosed it claimed Microsoft intended to patch it in February.

So, at the moment there are three zero-day vulnerabilities in Microsoft products that the company might have planned to patch on Feb. 14 but didn’t. Some security researchers, including Eiram, believe Microsoft should release the patches it has now instead of waiting. “Even if no exploits are currently available, Microsoft is gambling with their users’ security,” Eiram said. “If exploits do suddenly surface, Microsoft would likely have to release out-of-band security updates anyway, forcing customers to scramble to apply these fixes. It makes more sense to handle it in a proactive manner.”

Software vendors’ commitment to monthly patch cycles is understandable, as it gives customers some predictability about when security updates will need to be applied. However, Eiram believes that sticking to these cycles should never take priority over getting security fixes out in a timely manner. “Microsoft has always reserved the right to release out-of-band security updates when necessary, and even with no exploits available it is necessary now,” he said. “There are three known, unpatched vulnerabilities and at least one of them has code execution potential.”
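For readers unfamiliar with the term: a type confusion bug tricks a program into reading one object’s memory as though it were a different type, and in a browser engine that mistaken read can hand an attacker a pointer where a harmless number was expected. The actual IE/Edge flaw lives in native code for which Google published a crash PoC; the Python snippet below is only a conceptual illustration of “same bytes, wrong type,” not the bug itself.

```python
import struct

# The same 8 bytes interpreted as two different types: the essence of type
# confusion. In a browser engine the wrong-type read happens in native code,
# where the confused value may end up used as a pointer or vtable address.
raw = struct.pack("<d", 3.14159)          # memory laid out for a C double
as_float = struct.unpack("<d", raw)[0]    # intended interpretation
as_int = struct.unpack("<Q", raw)[0]      # confused interpretation

print(f"bytes     : {raw.hex()}")
print(f"as double : {as_float}")
print(f"as uint64 : {as_int:#018x}")
```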
The new chairman of the U.S. Federal Communications Commission will seek a stay on privacy rules for broadband providers that the agency passed just last October. FCC Chairman Ajit Pai will either ask for a full commission vote on the stay before parts of the rules take effect next Thursday, or instruct FCC staff to delay parts of the rules pending a commission vote, a spokesman said Friday.

The rules, passed when the FCC had a Democratic majority, require broadband providers to receive opt-in customer permission before sharing sensitive personal information, including web-browsing history, geolocation, and financial details, with third parties. Without the stay, the opt-in requirements were scheduled to take effect next week.

Critics have complained that the rules apply only to ISPs, and not to giant online companies, like Google and Facebook, that collect huge amounts of personal data. The FCC rules also hold ISPs to a higher privacy standard than the case-by-case enforcement the Federal Trade Commission uses when investigating other companies, critics say. Supporters of the strong ISP privacy rules counter that broadband providers have huge opportunities to collect customers’ personal information, and that U.S. law gives the FCC little authority to regulate the privacy practices of companies that aren’t network service providers.

“Chairman Pai believes that the best way to protect the online privacy of American consumers is through a comprehensive and uniform regulatory framework,” an FCC spokesman said by email. “All actors in the online space should be subject to the same rules, and the federal government shouldn’t favor one set of companies over another.”

Republican Pai has promised to roll back many of the regulations passed while Democrat Tom Wheeler served as FCC chairman. This week, the FCC voted to roll back some net neutrality regulations that require broadband providers to inform customers about their network management practices.

Pai’s decision to stay the privacy rules goes against U.S. law requiring the agency to protect customers of telecom networks, said Matt Wood, policy director at digital rights group Free Press. “It’s a tragedy that Chairman Pai is willing to ignore his own statutory mandate and delay rules that protect internet users from cable company abuse, while pretending that he’s just chasing after a more comprehensive privacy law that’s outside of his agency’s congressional jurisdiction,” Wood said by email. “The race-to-the-bottom mentality that Pai espouses may play well to the industries supporting him, but people will understand that Pai’s fake promise of better rules tomorrow just means stripping away all protections today.”

Pai’s decision, however, earned praise from former Representative Rick Boucher, a Democrat who has criticized FCC regulations in recent years.
The stay is “a smart first step toward rolling back asymmetrical regulation that is at odds with consumers’ privacy expectations, deters innovation and causes marketplace distortion,” said Boucher, now honorary chairman of the Internet Innovation Alliance, a broadband advocacy group. “Applying different privacy rules to the same online data by saddling only ISPs with new regulations doesn’t make sense,” Boucher added by email.
For months, a bug in Cloudflare’s content optimization systems exposed sensitive information sent by users to websites that use the company’s content delivery network. The data included passwords, session cookies, authentication tokens and even private messages.

Cloudflare acts as a reverse proxy for millions of websites, including those of major internet services and Fortune 500 companies, providing security and content optimization services behind the scenes. As part of that process, the company’s systems modify HTML pages as they pass through its servers in order to rewrite HTTP links to HTTPS, hide certain content from bots, obfuscate email addresses, enable Accelerated Mobile Pages (AMP) and more.
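To picture what that in-flight modification involves, take the simplest of those features, HTTP-to-HTTPS link rewriting: the proxy has to scan every page it serves and rewrite insecure link targets on the way out. The toy Python sketch below shows the idea; Cloudflare’s real rewriters are streaming state-machine parsers, not regular expressions, so treat this purely as an illustration.

```python
import re

# Rewrite http:// anchors to https:// -- a toy stand-in for the kind of
# in-flight HTML rewriting a CDN edge performs. Real implementations are
# stateful streaming parsers; a regex is used here only for brevity.
HREF_RE = re.compile(r'(href=["\'])http://', re.IGNORECASE)

def rewrite_links(html: str) -> str:
    return HREF_RE.sub(r"\1https://", html)

page = '<a href="http://example.com/login">sign in</a>'
print(rewrite_links(page))
# -> <a href="https://example.com/login">sign in</a>
```

It is exactly this class of parser, applied to untrusted page content at enormous scale, that harbored the bug described next.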
The bug that exposed user data was in an older HTML parser that the company had used for many years. However, it wasn’t triggered until a newer HTML parser was added last year, changing the way internal web server buffers were used when certain features were active. As a result, internal memory containing potentially sensitive information leaked into some of the responses returned to users, as well as to search engine crawlers. Pages containing the sensitive data were cached and made searchable by search engines like Google, Yahoo and Bing.

The leakage was discovered almost accidentally by Google security engineer Tavis Ormandy while he worked on an unrelated project. As soon as he and his colleagues realized what the strange data they were seeing was, and where it was coming from, they alerted Cloudflare. This happened on February 18th. Cloudflare immediately assembled an incident response team and killed the feature causing most of the leakage within hours. A complete fix was in place by February 20th. The rest of the time, until the incident was publicly disclosed Thursday, was spent working with search engines to scrub the sensitive data from their caches.

“With the help of Google, Yahoo, Bing and others, we found 770 unique URIs that had been cached and which contained leaked memory,” said John Graham-Cumming, Cloudflare’s CTO, in a blog post. “Those 770 unique URIs covered 161 unique domains.” A URI (Uniform Resource Identifier) is a character string that identifies a resource on the web, and the term is sometimes used interchangeably with URL (Uniform Resource Locator).

According to Graham-Cumming, the leakage might have been going on since September 22, but the period of greatest impact was between February 13 and February 18, when the email obfuscation feature was migrated to the new parser. Cloudflare estimates that around one in every 3.3 million HTTP requests that passed through its system potentially resulted in memory leakage, or about 0.00003 percent of all requests.

Even so, because of the nature of the exposed data, the incident was very serious, and Cloudflare customers might decide to take action, such as forcing users to change their passwords. “I’m finding private messages from major dating sites, full messages from a well-known chat service, online password manager data, frames from adult video sites, hotel bookings,” Ormandy wrote in an entry on Google Project Zero’s bug tracker during the incident. “We’re talking full https requests, client IP addresses, full responses, cookies, passwords, keys, data, everything.”

This bug is similar in effect to the Heartbleed vulnerability in OpenSSL, which could have allowed attackers to force HTTPS servers to leak potentially sensitive memory contents. In fact, Ormandy said it “took every ounce of strength not to call this issue CloudBleed.” But unlike Heartbleed, which had the potential to expose SSL/TLS private keys, no customer keys were affected in the Cloudflare incident.

“Cloudflare runs multiple separate processes on the edge machines and these provide process and memory isolation,” Graham-Cumming said. “The memory being leaked was from a process based on NGINX that does HTTP handling. It has a separate heap from processes doing SSL, image re-compression, and caching, which meant that we were quickly able to determine that SSL private keys belonging to our customers could not have been leaked.” One private key that was leaked, however, had been used to secure connections between Cloudflare machines.

To be on the safe side, internet users might want to consider changing their online passwords, something they should do regularly anyway to keep ahead of data breaches. “Cloudflare is behind many of the largest consumer web services (Uber, Fitbit, OKCupid, …), so rather than trying to identify which services are on Cloudflare, it’s probably most prudent to use this as an opportunity to rotate ALL passwords on all of your sites,” security researcher Ryan Lackey said in a blog post.

Mike Mimoso and Chris Brook recap RSA and discuss the news of the week, including the impact of Cloudflare’s “Cloudbleed” bug, Google breaking SHA-1, and more. Download: Threatpost_News_Wrap_February_24_2017.mp3 (music by Chris Gonsalves).

The Cloudflare content delivery network for months had been leaking customer data, everything from private messages to encryption keys and credentials belonging to users of some of the Internet’s biggest properties. The vulnerability has been addressed, Cloudflare CTO John Graham-Cumming said, but not before sensitive data was exposed belonging to users of a number of web-based services, including Uber, Fitbit, OkCupid and others.

Google Project Zero researcher Tavis Ormandy privately disclosed the issue to Cloudflare last Friday; Cloudflare said three “minor” features were to blame and have since been turned off. The first of the features, Graham-Cumming said, was turned on last Sept. 22, but he said the time of greatest potential impact started Feb. 13 and lasted until Ormandy’s disclosure last Saturday.

Ormandy said in a bug report posted to the Project Zero feed that he saw some unexpected data surface during an unrelated project: uninitialized memory mixed in among valid data, which he determined was coming from a Cloudflare reverse proxy.
“It looked like that if an html page hosted behind Cloudflare had a specific combination of unbalanced tags, the proxy would intersperse pages of uninitialized memory into the output (kinda like Heartbleed, but Cloudflare-specific and worse for reasons I’ll explain later),” Ormandy said in his report. “My working theory was that this was related to their ‘ScrapeShield’ feature which parses and obfuscates html – but because reverse proxies are shared between customers, it would affect *all* Cloudflare customers.”

The issue has been informally dubbed Cloudbleed, given its similarity to Heartbleed, the major OpenSSL vulnerability of 2014 that also leaked sensitive information from memory. Ormandy said it didn’t take long, during an analysis of some live samples, to spot encryption keys, cookies, passwords, POST data and HTTPS requests for other Cloudflare-hosted sites among the data coming from other users. Ormandy shared what he had found with Cloudflare, and yesterday disclosed in a tweet that the service had been leaking customer HTTPS sessions, including those of Uber, Fitbit, 1Password, OKCupid and others.
1Password quickly denied that the Cloudflare bug affected its data, saying 1Password was designed to protect against incidents like this, in which TLS fails. An Uber representative said the impact on its users was minimal. “Very little Uber traffic actually goes through Cloudflare,” Uber told Threatpost. “Only a handful of tokens were involved and have since been changed. Passwords were not exposed.” OKCupid also said it is investigating. “Cloudflare alerted us last night of their bug and we’ve been looking into its impact on OkCupid members. Our initial investigation has revealed minimal, if any, exposure,” an OKCupid representative told Threatpost. “If we determine that any of our users has been impacted we will promptly notify them and take action to protect them.” None of the other implicated services have made public statements. Meanwhile, a tracker on GitHub lists some 4.3 million sites potentially affected by Cloudbleed.

Cloudflare’s Graham-Cumming said that in some circumstances the company’s edge servers ran past the end of a buffer and returned memory containing private information. He clarified that no customer SSL keys were leaked, because SSL connections are terminated at an isolated NGINX instance.

Graham-Cumming blamed the leakage on an HTML parser used by three features. He said that between Feb. 13 and 18, one in 3.3 million HTTP requests resulted in memory leakage, 0.00003 percent of all requests. Cloudflare said it had replaced its Ragel HTML parser a year ago with a homemade parser called cf-html. The underlying bug was present in the Ragel-based parser as well, it said, but was never triggered because of the way the NGINX buffers were used; the new parser changed the buffering and caused the leakage. The three features using the parser were Automatic HTTPS Rewrites (enabled Sept. 22), Server-Side Excludes (enabled Jan. 30), and Email Obfuscation (enabled Feb. 13); all three were globally disabled or patched upon learning of the bug.

“Once we knew that the bug was being caused by the activation of cf-html (but before we knew why) we disabled the three features that caused it to be used. Every feature Cloudflare ships has a corresponding feature flag, which we call a ‘global kill’. We activated the Email Obfuscation global kill 47 minutes after receiving details of the problem and the Automatic HTTPS Rewrites global kill 3h05m later,” Graham-Cumming said. “The Email Obfuscation feature had been changed on February 13 and was the primary cause of the leaked memory, thus disabling it quickly stopped almost all memory leaks.”

“Within a few seconds, those features were disabled worldwide,” he said. “We confirmed we were not seeing memory leakage via test URIs and had Google double check that they saw the same thing.”
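Cloudflare’s postmortem reportedly traced the overrun to generated parser code that tested for the end of its buffer with an equality check on the parsing cursor; a change in how the cursor advanced allowed it to step past the boundary, so the check never fired and the code read on into adjacent memory. The Python sketch below simulates that failure mode under stated assumptions: the buffer contents, the odd-length boundary and the two-byte step that jumps it are all invented for illustration.

```python
# One backing buffer shared by consecutive requests, as in a busy reverse
# proxy: request A's HTML sits in memory right next to request B's secrets.
request_a = b"GET /page HTTP/1.1\r\n\r\n<a href=http://x>"   # 39 bytes (odd)
request_b = b"Cookie: session=SECRET-TOKEN; password=hunter2"
backing = bytearray(request_a + request_b)
A_END = len(request_a)

def buggy_parse(buf: bytearray, end: int, step: int) -> bytes:
    """Copy bytes until the cursor EQUALS `end` (the flawed check).

    With step=1 the cursor lands exactly on `end` and stops in time. With
    step=2 it jumps over the odd boundary, the equality test never fires,
    and the copy runs on into the neighbouring request's memory.
    """
    out, p = bytearray(), 0
    while p != end and p < len(buf):   # buggy: a bound of `p < end` would be safe
        out += buf[p:p + step]
        p += step
    return bytes(out)

print(buggy_parse(backing, A_END, step=1))  # request A only
print(buggy_parse(backing, A_END, step=2))  # request A plus request B's secrets
```

Replace the equality test with a proper bound (and clamp the copy to the boundary) and the leak disappears; a one-character class of mistake is what separated normal operation from months of silent leakage.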
A lingering issue is that search engines cached some of the leaked memory, and Cloudflare is working with Google and other providers to scrub those leaks from their caches. “We’ve been trying to help clean up cached pages inadvertently crawled at Google. This is just a Band-Aid, but we’re doing what we can. Cloudflare customers are going to need to decide if they need to rotate secrets and notify their users based on the facts we know,” Ormandy said on Sunday. “I don’t know if this issue was noticed and exploited, but I’m sure other crawlers have collected data and that users have saved or cached content and don’t realize what they have, etc. We’ve discovered (and purged) cached pages that contain private messages from well-known services, PII from major sites that use Cloudflare, and even plaintext API requests from a popular password manager that were sent over https (!!).”