Travel services company Sabre Corp. acknowledged this week that it’s in the middle of investigating a data breach in its Hospitality Solutions reservation system that may have spilled personally identifiable information and payment card data belonging to its customers. The Texas-based company disclosed the breach Tuesday in a quarterly 10-Q filing with the Securities and Exchange Commission.
According to the filing, attackers may have secured access to payment information contained in a subset of hotel reservations processed through SynXis, the company’s central reservation system. The platform, a cloud-based software-as-a-service (SaaS) solution, allows employees to access room pricing, scheduling, and availability at participating hotels.

It’s unclear exactly what the company means by a “subset of hotel reservations.” According to marketing materials on Sabre’s site, SynXis is used at more than 36,000 properties. According to iDataLabs, the platform is used by nearly 500 hospitality companies, including the Kimpton Hotel and Restaurant Group and the Commune Hotel and Resort group (now Two Roads Hospitality), to name a few.

The company said that unauthorized access to the system had been shut off and that there’s no evidence of “continued unauthorized activity at this time.” Sabre did not go into details of the breach, such as when it began, when it was mitigated, or how an attacker may have gained access to the system, but acknowledged that the compromise of “PII, PCI, or other information” could be a risk. Sabre said in an accompanying press release on Tuesday that it has contacted law enforcement and hired cybersecurity firm Mandiant to assist in its investigation.

The fact that SynXis is a cloud-based platform puts the onus on developers behind SaaS services to better secure their products, experts say. “Clearly, the surface area that is potentially affected is huge,” John Martinez, VP of Solutions at Evident.io, said Tuesday night. “A breach of this magnitude underscores the need for SaaS services, especially those hosted on cloud providers, to increase their security posture capabilities at a faster rate. Not all cloud-borne vulnerabilities are covered by traditional security tools.”

The breach at Sabre’s Hospitality Solutions division is the latest in a long line of hospitality hacks over the past several years. 
InterContinental Hotels Group, a conglomerate that counts Holiday Inn and Crowne Plaza among its chains, announced two weeks ago that it was looking into a breach, the second it has disclosed this year. Kimpton Hotel and Restaurant Group, a chain of boutique hotels, was breached last summer and is continuing to fight a class action case in the courts. On Tuesday the company moved to appeal a data breach suit ruling to the Court of Appeals for the Ninth Circuit. The post Sabre Corp. Investigating Breach of Reservation System appeared first on Gigacycle Computer Recycling News. from https://news.gigacycle.co.uk/sabre-corp-investigating-breach-of-reservation-system/
There are now so many cyberattacks that many enterprises simply accept that hackers and bad actors will find ways to break into their systems. A strategy some large businesses have developed over the past two years has been to quickly identify and isolate these attacks, possibly by shutting down part of a system or network so the hackers won’t get days or weeks to root around and grab sensitive corporate data.

This enterprise focus on rapid detection and response to attacks on networks and computers doesn’t replace conventional security tools that prevent attacks. Instead, businesses are relying on both prevention software and detection software. What’s happened most recently is that security software vendors are developing means to evaluate attacks with advanced analytics. That analysis can be fed back into existing prevention systems to help thwart future attacks. Detection becomes part of a security cycle, at least in theory.

“There’s a big focus on rapid detection and response in enterprises because prevention often misses the intrusions and malicious activities,” said Gartner analyst Avivah Litan in an interview. The focus started in earnest about two years ago following a big increase in data breaches at U.S. retailers, restaurants and hospitals. “Security officials woke up and realized with $80 billion spent [in 2014] on prevention, a lot of attacks were getting through,” Litan said. The main intent is to find attacks early “so that attackers won’t get in and sit around for six months and silently steal information, as most attackers do.”

James Moar, an analyst at Juniper Research, said the modern state of cybersecurity has evolved. “There is no longer a reliable network perimeter that can be guarded, but rather a series of risks that have to be mitigated or exposed,” he said in an email. 
“In order to protect and secure such an environment, anomaly detection tools are the first step in determining if an attack is underway.”

How detection helps

What typically happens when an attack is detected is that security managers will isolate it, often by confining the malware or other threat to a portion of a corporate network where as few endpoints (servers and computers) as possible can be attacked. For a large company, a network could comprise a number of smaller networks arranged in a topology that allows many vital business functions to continue even when one portion is shut down. “Folks in security management are doing a lot more segmenting of their networks these days, so that if they detect something major, they can shut off a portion,” said IDC analyst Robert Ayoub in an interview.

An old deception approach, called a honey pot, is coming back into vogue in some security groups, he said. “Research organizations and some managed service providers will try to lure [attackers] in to see what attacks are being used. We have seen a lot of renewed interest in deception technology, although there’s not yet mainstream adoption.” Last fall, computer scientists at Penn State University described a decoy network approach to help deflect a hacker’s hits. The researchers created a computer defense system that senses possible malicious probes of the network. Attacks were then redirected with a network device called a reflector to a virtual network that contained only hints of the real network. The researchers simulated the attack and the defense without using an actual network, but plan to deploy the system in a real one.

Detection software usually works by digging up anomalous behaviors. The most evolved detection systems work from a baseline of normal activity on a network, server, computer or other endpoint device, Litan said. 
A profile of normal behaviors by users, the amount and type of data transmitted in a system and other network activity are constantly compared with ongoing transactions using advanced analytics, she said. “These approaches might even look at a user’s activity relative to his colleagues to see if he’s doing something unusual,” she said. Recently, some security vendors have begun using machine learning to bolster the analytics.

Here’s one example of how detection analytics might work: A procurement request made at 3 a.m. in Singapore by an employee based in London could be flagged as questionable. But the security system could check a corporate travel app, see that the employee had a flight and hotel booked in Singapore, and then approve the procurement. Or a totally different result might occur, depending on corporate policies, such as requiring a manager’s approval for the procurement.

Detection products

Detection products are abundant and are being updated with the newest technology by nearly every security vendor, analysts said. “There are well over a hundred vendors in this space, including all the major names like McAfee, Cisco and Symantec, down to newer ones like Phantom,” Ayoub said. These products are deployed in the U.S. mainly by large banks, retailers, technology and defense-related companies, Litan said. Small and mid-tier companies have the option of hiring a managed service provider to deliver detection services as part of a larger package of security products. Such service providers include large telecommunications companies, but also smaller cybersecurity firms like Cybereason and CrowdStrike, among others.

Gartner divides the detection technologies used by enterprises into three relatively new markets that incorporate advanced analytics. Endpoint [threat] detection and response (EDR) was more than a $600 million market in the U.S. in 2016. User and entity behavior analytics (UEBA) was a $100 million market last year. 
Network traffic analysis (NTA) is a third new area, but Gartner didn’t provide an estimate for the size of that market. These newer detection markets can be compared with a much larger but older detection technology market called security information and event management (SIEM), which Gartner said reached about $1.6 billion in U.S. revenues in 2016. The major distinction between SIEM and the newer technologies is that SIEM is rules-based, while newer detection systems rely on advanced analytics that typically, but not always, include machine learning software, Litan said.

Advice to security teams

A combination of newer detection tools with older prevention tools is how large enterprises are typically addressing their security needs. “With security, there’s always room for improvement, and you’ll never solve all security problems,” Litan said. “You can’t only have prevention. You have to have detection, but there’s no silver bullet.” Jack Gold, an analyst at J. Gold Associates, agreed. “It’s not really one or the other,” Gold said. “If you can find a hack quickly and shut it down, then you’ve essentially prevented a breach. The best approach is one that’s layered with both prevent and detect. Just to have one or the other isn’t as secure as deploying both. Many vendors are moving in that direction as well.”

Juniper’s Moar said it is “vital” for enterprises to have a detection tool that works well with their prevention and mediation software. “Having a tool that shows threats is useless if you can’t counter those threats,” Moar said. “Software that seeks out new connections on the company network, making them visible to security detection and remediation, eliminates this problem.” Before a company buys detection products, Litan said, there are a series of simple steps that can be taken to tighten up systems. That includes what may seem obvious: remove administrator privileges from end-user accounts so that malware can’t be distributed throughout a system. 
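The contextual check in the procurement example earlier (a 3 a.m. request in Singapore, cleared by travel data) can be sketched in a few lines. This is a hypothetical illustration of the baseline-plus-context idea; the business-hours window, field names and policy outcomes are assumptions, not any vendor's actual logic:

```python
# Hypothetical sketch: flag deviations from a per-user baseline, then try
# to explain them with contextual data before escalating.

BUSINESS_HOURS = range(8, 19)  # 08:00-18:59 local time, an assumed baseline

def is_anomalous(hour: int, city: str, home_city: str) -> bool:
    """Flag activity outside normal hours or away from the user's base."""
    return hour not in BUSINESS_HOURS or city != home_city

def review(hour: int, city: str, home_city: str, booked_trips: set) -> str:
    if not is_anomalous(hour, city, home_city):
        return "approve"
    if city in booked_trips:       # travel system explains the location
        return "approve"
    return "escalate"              # e.g. require a manager's sign-off

# The 3 a.m. Singapore procurement by a London-based employee:
print(review(3, "Singapore", "London", {"Singapore"}))  # approve
print(review(3, "Singapore", "London", set()))          # escalate
```

A real UEBA product would learn the baseline statistically rather than hard-code it, but the flag-then-explain flow is the same.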
“There’s a lot you can do before spending more on detection as you wait for vendors to get smarter. My main piece of advice is you make sure you work closely with the vendors and make sure you have their current version,” Litan said. Litan said vendors are working on developing automated detection tools that may eventually reduce a company’s heavy reliance on security analysts to track attacks. Even so, Ayoub said security remains an ever-expanding field that will continue to rely on people power. “If a security event happens, a company will start collecting data around it, which still requires certain skill sets that aren’t generally available. We still need security analysts to track this stuff down.” The post Face it: Enterprise cyberattacks are going to happen appeared first on Gigacycle Computer Recycling News. from https://news.gigacycle.co.uk/face-it-enterprise-cyberattacks-are-going-to-happen/ A comment period has closed on NIST’s new password guidelines for federal agencies that challenge the effectiveness of traditional behaviors around authentication such as an insistence on complex passwords and scheduled resets. As more tech companies move away from passwords and toward multistep and multifactor authentication, and physical keys, NIST’s guidance accelerates the conversation for the U.S. government. The document also proposes that passwords be checked against blacklists of unacceptable credentials, including passwords already exposed in breaches, dictionary words, and repetitive or sequential characters. The overall marching orders, however, are to relieve user frustration caused by decades of memorizing an overbearing number of passwords to get your job done. “Mitigations such as blacklists, secure hashed storage, and rate throttling are more effective at preventing modern brute-force attacks,” the guidelines said. 
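The blacklist checks the guidelines propose (breached passwords, dictionary words, repetitive or sequential characters) can be sketched as follows. The word lists are tiny stand-ins for real breach corpora, and the exact rules are an assumption based on the description above:

```python
import re

# Sketch of NIST-style blacklist screening. A real deployment would check
# against large breach corpora and full dictionaries, not these stand-ins.

BREACHED = {"password", "123456", "qwerty", "letmein"}
DICTIONARY = {"dragon", "monkey", "sunshine"}

def violates_blacklist(pw: str) -> bool:
    low = pw.lower()
    if low in BREACHED or low in DICTIONARY:
        return True
    if re.search(r"(.)\1\1", pw):                # repetitive: "aaa", "111"
        return True
    # sequential runs of three or more characters, e.g. "abcd" or "1234"
    if any(ord(b) - ord(a) == 1 and ord(c) - ord(b) == 1
           for a, b, c in zip(pw, pw[1:], pw[2:])):
        return True
    return False

print(violates_blacklist("123456"))    # True (breached list)
print(violates_blacklist("Tr0ub&X9"))  # False
```

The point of the guidance is that a screen like this, plus throttling and hashed storage, blocks the attacks that complexity rules were supposed to stop, without the usability cost.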
The final draft is ready for approval, and it’s especially timely after a brutal 2016 when cache after cache of stolen credentials was made public, disclosing more than one billion credentials. The disclosures elevated debate to the highest levels over password reuse and the effectiveness of current authentication schemes. As more credentials were leaked, it became abundantly clear that passwords were ready to be put out to pasture as consumers and business users alike have to manage too many credentials and reuse them across internet-based services.

“Users need to remember these passwords and if they’re overly complex or if they change too frequently, users will resort to writing them down,” said Scott Petry, CEO of Authentic8, developer of a virtual browser called Silo. “That defeats the secret nature of the password. Or they’ll derive slightly different passwords on a common theme and reuse them at set intervals. This creates a false sense of integrity.”

Yahoo alone disclosed that nation-state actors and cybercriminals had accessed account information for more than 1 billion accounts, while LinkedIn, Twitter, Dailymotion, iMesh, VK, MySpace and many others reported lost credentials and in many cases forced a password reset for users. Compounding the problem is the fact that the average number of services registered to one email account for 25-34-year-olds is more than 40, according to credit-checking firm Experian. And on average, users had only five different passwords for those accounts, Experian reported last year.

“Many attacks associated with the use of passwords are not affected by password complexity and length. Keystroke logging, phishing, and social engineering attacks are equally effective on lengthy, complex passwords as simple ones,” NIST said. 
The rationale for frequent password changes, or certain length and complexity requirements, is the belief that these make credentials more resistant to brute-force attacks, password-guessing attacks, and dictionary attacks. NIST said that minimum password length and complexity should depend on the threat model being addressed. Throttling the number of guesses, for example, is a substantial security measure against online attacks, while salting and hashing slow down offline attacks.

“Glad to know that NIST understands that passwords are a nuisance and that adding more complexity and rules doesn’t make the lives of users any easier. These policies only increase the calls to the help desk for password recovery,” said neoEYED CEO Alessio Mauro. “Unfortunately, more and more frequently, the problem is that passwords are stored in the server in a wrong way or the connection the users adopt is not safe. I believe today that, whichever password you are actually using, is already in the hands of the hacker (or soon to be) and soon to be encrypted, so why even care about so many policies?” The post Proposed NIST Password Guidelines Soften Length, Complexity Focus appeared first on Gigacycle Computer Recycling News. from https://news.gigacycle.co.uk/proposed-nist-password-guidelines-soften-length-complexity-focus/

Google has joined Amazon Web Services in promising customers of its cloud services that it will be compliant with new European Union data protection rules due to take effect next year. Neither company is fully compliant yet, but both have now made public commitments to meet the requirements of the EU General Data Protection Regulation (GDPR) by May 25, 2018, echoing a promise Microsoft made back in February. The GDPR replaces the 1995 Data Protection Directive. 
Among its biggest changes is a set of new requirements placed on companies. It’s not all extra work for businesses: There are some exemptions for small and medium-size businesses (SMEs), and the GDPR’s move to a single set of rules for all of the EU’s 28 (for now) member states puts an end to jurisdiction shopping (litigating privacy cases in the most favorable territory) and makes compliance simpler for companies working across borders. But some businesses will become liable in ways that they weren’t before: The GDPR applies not just to data controllers, typically those by or for whom the data was collected, but also to data processors, the service providers or middlemen that hold the data or perform calculations on it. Their customers will want the rights and responsibilities of each party set out clearly before the new rules take effect.

AWS Chief Information Security Officer Stephen Schmidt outlined the company’s progress toward GDPR compliance in a blog post on April 25. “I am happy to announce today that all AWS services will comply with the GDPR when it becomes enforceable,” he wrote. That surely prompted Wednesday’s blog post from Google Cloud’s director for security, trust and privacy, Suzanne Frey, and its director of data protection and compliance, Marc Crandall. “Google is committed to GDPR compliance across G Suite and Google Cloud Platform (GCP) services when the GDPR takes effect,” they wrote. But both companies were beaten to the punch by Microsoft Chief Privacy Officer Brendon Lynch. “Microsoft is committing to be GDPR compliant across our cloud services when enforcement begins,” he wrote on Feb. 15 in a blog post about the readiness of services such as Azure, Dynamics 365 and Office 365 for the new rules. AWS is a little further ahead than Google, at least when it comes to the paperwork. The company has already revised its Data Processing Agreement to meet the requirements of the GDPR, and is making it available to customers on request, Schmidt said. 
Frey and Crandall could only say that Google Cloud has evolved its data processing terms over the years, and that they “will be updated for the GDPR as well.” Once again, Microsoft has trumped them: Lynch pointed readers to the GDPR pages of the company’s Trust Center, which now indicate that Microsoft made available contractual guarantees on data processing back in March. It’s a fairly safe bet that the big cloud service providers will ensure their compliance with the new regulation: Their business, at least in Europe, depends on it. But their customers operating in Europe still have work to do before the deadline. They’ll need to figure out (if they haven’t already) what personal information they hold about European citizens, update internal governance and procedures to determine who can access the data and how it will be protected, and prepare the documents needed to prove compliance with the new rules come May 25, 2018. The post Google echoes Amazon's assurance on EU data protection compliance appeared first on Gigacycle Computer Recycling News. from https://news.gigacycle.co.uk/google-echoes-amazons-assurance-on-eu-data-protection-compliance/ Despite the USA Freedom Act of 2015, the NSA collected 151 million records of Americans’ phone calls last year, even though it had obtained warrants from the FISA court to spy on only 42 people suspected of having ties to terrorism. The NSA also complied with requests from government officials to reveal the identities of 1,934 U.S. persons ensnared in the foreign surveillance. The annual report, issued by the Office of the Director of National Intelligence, provides the first assessment of the effectiveness of the 2015 USA Freedom Act which was meant to limit dragnet surveillance of millions of Americans’ phone records. 
In 2016, 151,230,968 was the total estimated number of Americans’ call detail records, meaning metadata about calls, such as the number of the caller and recipient as well as the duration and time of the call, which the NSA received from providers and then stored in NSA repositories. We are perhaps supposed to feel better by knowing that the 151 million records gobbled up last year included multiple calls made from or to the same phone numbers. The real count of how many Americans’ phone records were collected, minus the duplication, is reportedly smaller, but the report doesn’t provide that number.

The report comes on the heels of the NSA announcement that it will stop sifting through emails or other internet communications of people who are not targets of surveillance if an actual target is mentioned in the communications. The NSA said, “Instead, this surveillance will now be limited to only those communications that are directly ‘to’ or ‘from’ a foreign intelligence target.”

The new transparency report reveals that the total number of probable cause court orders issued by the Foreign Intelligence Surveillance Court was 1,559 in 2016; 1,687 is cited as the “good faith estimate” of the number of targets of those orders. Are we supposed to be impressed that the number of targets was eight fewer than in 2015? It’s still 125 more targets than in 2014. Information under FISA Title VII Sections 703 and 704 explains that a target doesn’t necessarily mean one person; it could also mean a “group, entity composed of multiple individuals or foreign power that uses the selector such as a telephone number or email address.” Of the estimated 1,687 FISA “probable cause” targets, 19.9 percent, or an estimated 336 targets, were Americans. Keep in mind that one target could be multiple people or groups. 
Congress is deciding whether or not to reauthorize FISA Section 702, which gives the NSA the thumbs up to collect information on Americans as long as they are communicating with a foreign target. The law is set to expire at the end of this year. In 2016, there were 106,469 targets of Section 702 orders; there were 94,368 in 2015. Once in 2016, according to the report, the FBI received and reviewed information that was acquired under NSA surveillance about an American. The report calls the data “Section 702-acquired information that the FBI identified as concerning a U.S. person in response to a query that was designed to return evidence of a crime unrelated to foreign intelligence.”

NSA allowed almost 2,000 Americans to be unmasked

Amid President Trump’s continued accusations of warrantless surveillance ordered by former President Obama, and accusations that former National Security Advisor Susan Rice sought to learn the identities of Trump campaign officials before the 2016 election, the report revealed that the NSA complied with requests from government officials to reveal the identities of 1,934 U.S. persons ensnared in foreign surveillance. While the transparency report does give the number of Americans “unmasked” upon request last year (that is, the number for whom redactions meant to protect privacy were removed), it doesn’t give details about who asked for the names or why. The post NSA collected 151 million records of Americans' calls, allowed 1,934 to be 'unmasked' appeared first on Gigacycle Computer Recycling News. from https://news.gigacycle.co.uk/nsa-collected-151-million-records-of-americans-calls-allowed-1934-to-be-unmasked/

‘Take Google’s advice and get out of CA infrastructure’

Mozilla has weighed in on the ongoing Symantec-Google certificate spat, telling Symantec it should follow the Alphabet subsidiary’s advice on how to restore trust in its certificates. 
Readers will recall that Symantec has repeatedly issued certs that didn’t ring true with browser-makers, and at the end of April 2017 Google started a countdown, at the conclusion of which its Chrome browser would warn users if it encountered Symantec certs. Symantec offered up a remediation plan, mostly based on putting auditors through the joint. But it looks like that’s not sufficient for Mozilla. UK-based Mozilla developer Gervase Markham has posted his note to Symantec at Google Docs. Mozilla strongly suggests that Symantec take a deep breath and swallow the bitter pills that Doctor Google has prescribed. Chief among Google’s suggestions is that Symantec work with one or more existing certificate authorities (CAs) to take over its troubled infrastructure and rework its validation processes. That would relegate the company to more-or-less reseller status, letting it maintain its customer relationships but relieving it of responsibility for ongoing operations. The alternative, Markham writes, is for Symantec to:
Why so harsh? The core of Mozilla’s argument is that it just doesn’t feel Symantec grasps how serious its issues are. As Markham writes, Symantec has not “adequately demonstrate[d] that they have grasped the seriousness of the issues here,” and its proposed measures “mostly amount to doing more of what, in the past, has not succeeded in producing consistent high standards.” The reason, Markham writes, isn’t wrongdoing (so “we are not in StartCom/WoSign territory”); it’s simply that Symantec seems to have lost control of its intermediaries. The post Mozilla Takes A Turn Slapping Symantec's Certification SNAFU appeared first on Gigacycle Computer Recycling News. from https://news.gigacycle.co.uk/mozilla-takes-a-turn-slapping-symantecs-certification-snafu/

CSO Online | May 3, 2017 In the latest episode of Security Sessions, CSO Editor-in-Chief Joan Goodchild speaks via Skype with Michael A. Davis, the CTO of behavioral analytics company CounterTack. The two discuss why machine learning is so important to CSOs and CISOs, even if they themselves are not particularly investing in such technology for their own security systems. The post Security Sessions: Why CSOs should care about machine learning appeared first on Gigacycle Computer Recycling News. from https://news.gigacycle.co.uk/security-sessions-why-csos-should-care-about-machine-learning/

In the beginning, devices on the internet were fun. My favorite was the Carnegie Mellon Computer Science Department Coke machine. Starting in the 1970s, you could “ping” it to see if it had sodas ready and whether they were cold yet. It was good, silly fun. Now everything except the cat* is hooked to the internet, and that’s not so funny at all. Oh, sure, some internet of things (IoT) devices are enjoyable and useful. I have an Amazon Echo in my bedroom and a Google Home in my kitchen. I use them every day. But I’m aware of their privacy problems. You should be too. 
For example, both devices are always listening to you. And when I say “always,” I mean every single second of every single day. In theory, they’re both just waiting for their activation phrases, “Alexa” and “OK Google,” respectively. In practice, that means they’re listening to you constantly. I’m not too worried about this. Unlike with Windows 10 Cortana, you can tell these devices to stop listening. Of course, they’ll be a lot less useful that way, but at least you have the option. No, what really concerns me about the IoT aren’t the new devices that are explicitly connected to cloud services, it’s the ordinary gadgets that are now listening in. Take, for example, my Vizio M50-C1 50-inch 4K ultra-HD smart LED TV. It’s a fine TV, but until recently it was tracking my viewing habits and sharing this information with advertisers. Vizio wasn’t the only TV company guilty of snooping. LG and Samsung have peeked into your viewing habits too. Even devices such as “smart” toasters — yes there is such a thing — can tell their vendors what time you make toast in the morning. Or, more seriously, a hacker camping in your internet connection can track your toasting habits to figure out when you’re not at home. You see, IoT devices tend not to have any security to speak of. Heck, even IoT security systems have been shown to be as secure as a lock made out of rubber bands. Leaving aside how much damage home IoT devices can do for their owners, IoT gadgets are becoming the agents of choice for massive distributed denial-of-service (DDoS) attacks. Who knew your DVR could help wreck a business over the internet? Hackers knew, that’s who! If that weren’t bad enough, IoT firmware tends not to be updated at all. Once someone finds a security hole — and it can be as brainless as a single administrative password for all devices — it’s open forever. Let’s say your gadget can be updated. IoT devices tend to be patched automatically by the maker. 
Do you really want to try to get a drink of cold water from your refrigerator only to be greeted by an “Update 32% complete” message? I don’t think so! I love gadgets. I really do. But when it comes to the IoT, I prefer most of my devices to be dumb. They just work better that way. * There are actually a lot of IoT cat devices. My calico, Mirabella Marvel, doesn’t like any of them. The post The Internet of messy things appeared first on Gigacycle Computer Recycling News. from https://news.gigacycle.co.uk/the-internet-of-messy-things/

The time needed to respond to and mitigate DDoS attacks can be costly for companies, with some businesses losing roughly $2.5 million on average per attack, a research report released today said. Neustar, an analytics firm that sees swathes of DDoS attack telemetry daily, boiled down some of the figures in its annual Worldwide DDoS Attacks and Cyber Insights Research Report, released Tuesday. The data was culled from a survey it carries out twice a year to track the shifting trends of DDoS attacks; 849 of the 1,010 organizations it surveyed, 84 percent, had been hit by a DDoS attack, and 86 percent of those hit had been attacked multiple times. Together, the 849 companies lost $2.2 billion in revenue over the last 12 months responding to the attacks, according to the report.

One company, a U.S.-based video gaming firm that makes $1 million an hour in revenue, said it was attacked between two and five times in the last 12 months. While the company is at the high end of the spectrum, Neustar claims each DDoS attack may have cost the firm between $12 million and $30 million to mitigate, assuming the attacks take three hours to detect and three hours to respond to. For 63 percent of the companies Neustar talked to, a DDoS attack can amount to a loss of $100,000 in revenue per hour. That’s up 13 points from 50 percent in 2016, the firm says. 
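The report's per-attack figures follow from simple arithmetic: hourly revenue loss multiplied by the detection-plus-response window. A minimal sketch, using the report's assumed three hours to detect and three to respond:

```python
# Back-of-the-envelope model of the per-attack cost figures cited in the
# report: hourly revenue loss times the detection + response window. The
# six-hour default mirrors the report's 3h-detect / 3h-respond assumption.

def attack_cost(hourly_loss: float,
                hours_to_detect: float = 3.0,
                hours_to_respond: float = 3.0) -> float:
    return hourly_loss * (hours_to_detect + hours_to_respond)

print(attack_cost(100_000))    # 600000.0 -> ~$600k per six-hour incident
print(attack_cost(250_000))    # 1500000.0
```

Under this model, a firm losing $100,000 an hour faces about $600,000 per six-hour incident; the gaming firm's $12 million to $30 million estimate implies either longer outages or costs well beyond direct revenue loss.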
Attack costs run even higher, at $250,000 per hour, for 43 percent of the organizations it talked to, according to the report. While it’s getting pricier to fix DDoS attacks, the firm said it’s also taking longer to detect and deal with them. More than half, 51 percent, of the attacks the company has followed so far this year have taken at least three hours to address. While on the whole companies are still mostly finding out about attacks via their internal security teams, those numbers are down in 2017. Instead, there’s been a spike in organizations finding out about attacks through their customers. So far this year, 40 percent of respondents said they learned of a DDoS attack through their customer base, up from 29 percent last year.

The figures come as DDoS attacks continue to grow and diversify. The second half of the report points out that there’s been an uptick in the number of mitigations Neustar has seen its customers deploy during the past year, an increase in the average size of attacks it’s seen mitigated, and a higher average peak attack size. The firm claims attacks are leveraging multiple vectors and using botnets to drive higher packet-per-second traffic, something that results in larger average attack sizes. The company says multivector attacks, attacks that combine ICMP, UDP, and DNS, are clearly on the upswing. One of the biggest attacks the firm observed over the past few months exceeded 100 Gbps and used UDP, multiple TCP vectors, and ICMP. Attackers are so motivated to bypass defenses that they’ve begun adopting multiple attack vectors to get the job done. At the end of last year multivector attacks totaled 71 percent of the attacks the firm saw; that number is up to 81 percent over the last three months alone, according to the report. Only a small part of the report is devoted to Mirai, the botnet that ensnared more than 103,000 IP addresses and knocked dozens of sites offline last fall. 
The firm says activity relating to the botnet – for now at least – has been lower in volume and smaller in size over the year’s first three months. Mirai attacks haven’t gone away, but aside from a 54-hour DDoS attack on a U.S. college in February, they haven’t commanded as many headlines over the last few months. While Neustar says it doesn’t expect these trends to hold, it’s possible that Hajime, an IoT botnet first uncovered a few weeks back, could be helping drag down those Mirai numbers. The vigilante malware, which closes off the vulnerable Telnet ports used by Mirai, had infected upwards of 185,000 devices by the end of April. “They are both competing for the same resources, so it’s a constant battle of good versus evil in the IoT landscape at the moment,” Travis Smith, a senior security research engineer at Tripwire, told Threatpost last month of the Mirai/Hajime battle.

The post DDoS Attacks Can Cost Businesses Up to $2.5M Per Attack, Report Says appeared first on Gigacycle Computer Recycling News. from https://news.gigacycle.co.uk/ddos-attacks-can-cost-businesses-up-to-2-5m-per-attack-report-says/

A new crawler released today by Shodan, designed to find command-and-control servers, has already unearthed 5,800 controllers for more than 10 remote access Trojan (RAT) families. The crawler, called Malware Hunter, poses as an infected computer beaconing out to an attacker’s server waiting for additional commands or malware downloads. Unlike passive honeypots and sinkholes, Malware Hunter actively seeks responses from C2 servers by pretending to be a newly infected machine sending out a callback with system information. Shodan has already integrated the free crawler’s results into its searches, and partner Recorded Future feeds the data into its API and provides its customers with additional context around the threats.
Shodan’s search engine is a favorite among security researchers; it scans the internet looking for open ports belonging to connected devices, including servers, routers, and IoT devices. Malware Hunter, Shodan said, beacons out to every IP address as if it were a command-and-control server, and anything that responds is considered a C2 controller. “What Shodan collects is a positive response,” said Shodan creator John Matherly. “All we’re saying is that based on our technology, we determined this looks like a C2. We don’t probe. We don’t want to send unnecessary amounts of traffic to the C2; we don’t want to tip them off. We just want to flag it and forward it to other organizations that are better at doing forensic and investigative work.” Recorded Future and Shodan have been working on this project for two years, and to date it has found thousands of controllers for more than 10 RAT families, including Gh0st RAT, njRAT, and Dark Comet, notorious cybercrime and espionage tools. Gh0st RAT in particular caught the researchers’ attention, given that it has primarily been a nation-state tool in APT attacks against government agencies, activists, and other political targets. “We’ve found more than we expected,” said Daniel Hatheway, senior technical analyst at Recorded Future. “Especially on Gh0st RAT, which was shocking to us. We didn’t think it was as prevalent any more. We didn’t expect the number to be quite as high as it was.” The project decided to focus on detecting RAT command-and-control servers first, but it has also dredged up other types of malware, including instances of the ZeroAccess Trojan. The ZeroAccess botnet has in the past been responsible for spreading information-stealing and click-fraud malware. “It was easy to develop a proof of concept for RATs; it’s a straightforward interaction,” Matherly said.
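Shodan hasn’t published its exact signatures, but the kind of check involved can be sketched for Gh0st RAT, whose controllers are widely documented as replying with a 5-byte “Gh0st” magic header, two little-endian lengths, and a zlib-compressed payload. A minimal, hypothetical classifier along those lines:

```python
import struct
import zlib

# Hypothetical sketch, not Shodan's actual signature: classify a probe
# response as a likely Gh0st RAT controller using the well-documented
# wire format: b"Gh0st" + total length (u32 LE) + uncompressed length
# (u32 LE) + zlib-compressed payload.
def looks_like_gh0st(data):
    if len(data) < 13 or data[:5] != b"Gh0st":
        return False
    total_len, plain_len = struct.unpack_from("<II", data, 5)
    if total_len != len(data):
        return False
    try:
        return len(zlib.decompress(data[13:])) == plain_len
    except zlib.error:
        return False

# Build a fake Gh0st-style packet to exercise the check.
payload = zlib.compress(b"\x00")  # trivial one-byte command
pkt = b"Gh0st" + struct.pack("<II", 13 + len(payload), 1) + payload
print(looks_like_gh0st(pkt))          # True
print(looks_like_gh0st(b"HTTP/1.1"))  # False
```

This mirrors the approach Matherly describes: no probing beyond the initial callback, just a pattern match on whatever the remote host volunteers.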
“You get a lot of bang for your buck in terms of how much effort it takes to find RATs.” The 10-plus signatures already in use ferret out behaviors that snare new versions of RATs. “We may not know it’s a new version right away,” Matherly said, “but it elicits a response from a C2.” Users with a free Shodan account will have access to an overview of the results generated by Malware Hunter. Recorded Future has integrated the results into its products along with other analysis providing additional context around a detection. The results, meanwhile, have value to researchers and network admins alike. “A network admin could actually dump that list (of results) and be pretty confident they could block everything out of the gate,” Hatheway said, adding that something like this could proactively block phishing sites before campaigns are even launched. “In terms of raw numbers, we feel like it’s been way more than we ever expected to find,” Matherly said. “It’s one of those things where we said ‘Why haven’t we done this sooner?’”

The post Malware Hunter Crawls Internet Looking for RAT C2s appeared first on Gigacycle Computer Recycling News. from https://news.gigacycle.co.uk/malware-hunter-crawls-internet-looking-for-rat-c2s/
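The blocklist use Hatheway describes can be sketched as a small transform from exported detections to firewall rules. The result shape below assumes the Shodan client’s usual format (a dict with a `matches` list of per-host records); the IPs are fabricated placeholders, not real Malware Hunter output:

```python
# Hypothetical sketch: turn Shodan-style search results into iptables
# drop rules a network admin could apply. Sample data is fabricated.
def c2_blocklist(results):
    ips = sorted({m["ip_str"] for m in results.get("matches", [])})
    return ["iptables -A OUTPUT -d %s -j DROP" % ip for ip in ips]

sample = {"matches": [{"ip_str": "203.0.113.42"},
                      {"ip_str": "198.51.100.7"}]}
for rule in c2_blocklist(sample):
    print(rule)
# iptables -A OUTPUT -d 198.51.100.7 -j DROP
# iptables -A OUTPUT -d 203.0.113.42 -j DROP
```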