Welcome to the Proxy Update, your source of news and information on Proxies and their role in network security.

Monday, August 30, 2010

SSL Proxy And Anti-Malware Go Hand In Hand

At first glance you may think that an SSL proxy and anti-malware have nothing to do with each other. While each serves its own purpose in a Secure Web Gateway architecture and deployment, they are actually crucial to each other's success in protecting an organization's network from web-based threats, malware, and cybercrime.

Let's start with the SSL proxy. Having a web proxy without an SSL proxy used to be quite common, as few pages other than financial services used encryption. There was a time when a web proxy that handled pages in the clear covered almost all the web pages of interest for an organization's policy compliance. Today, webmail offerings routinely use SSL-encrypted logins and even maintain SSL for the entire mail session. SSL is also used wherever personal credentials are entered, whether on a social networking, shopping, or other entertainment site. Because of the widespread use of encryption on websites, making sure you use an SSL proxy (basically a proxy that can inspect and enforce policy on the contents of an SSL session) is more important than ever.

At one time, SSL proxying and inspection were important mostly for DLP (Data Leakage Protection): organizations used them to make sure confidential data wasn't leaving the organization through secure, encrypted sessions. Today they're just as important for making sure web threats don't enter through those same encrypted connections.

The key to providing security with SSL inspection is an anti-malware or anti-virus scanner. Traditional methods of content inspection, such as URL databases and real-time rating in the cloud, are hampered by the user credentials usually associated with SSL. URL databases rely on generally available URLs, not the custom URLs generated after a user credential is verified. Real-time rating systems suffer from the same problem: they rate pages they can reach, and secure web gateways generally don't send users' credentials across the internet to a real-time rating system to fetch the full contents of a URL, as that would generally be considered a security risk or even a security breach.

That leaves one reliable way to ensure the content within an SSL-encrypted page is safe: use an anti-malware or anti-virus scanner locally at the proxy to inspect the data the SSL proxy receives as it comes in from the Internet. If the anti-malware program detects any threats, the proxy can block the downloads and infected web pages. Without an SSL proxy and anti-malware, threats buried in encrypted pages would pass into the organization's network.
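To make the flow concrete, here is a minimal sketch (not any vendor's actual implementation) of what a proxy could do with a decrypted response body before re-encrypting it for the client. The scan_bytes() helper is a hypothetical stand-in for whatever anti-malware engine the gateway integrates; real gateways often hand content to the scanner over a protocol such as ICAP.

# Hypothetical scan-then-forward hook an SSL proxy could call on each
# decrypted response body before re-encrypting it for the client.

def scan_bytes(payload: bytes) -> bool:
    """Return True if the payload is clean, False if a threat is found.
    Stand-in for the gateway's real AV engine; here it only recognizes
    the standard EICAR test string."""
    eicar = b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
    return eicar not in payload

def handle_decrypted_response(body: bytes) -> bytes:
    """Called after the proxy has decrypted the server's response and
    before it re-encrypts and forwards it to the client."""
    if scan_bytes(body):
        return body  # clean: pass through unchanged
    # Infected: substitute a block page instead of the real payload.
    return b"<html><body>Download blocked: malware detected at the gateway.</body></html>"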

A company using an SSL proxy should of course follow prudent guidelines around privacy concerns with regard to content found in SSL sessions. A common approach is to set up the SSL proxy to bypass visits to financial sites, so as not to invade a typical end-user's privacy.

Any organization concerned with web threats needs to implement an SSL proxy if it hasn't done so already, and tied to that implementation should be a plan to make anti-malware scanning a standard part of the web gateway.

Monday, August 23, 2010

Why Do You Need a Proxy in the Secure Web Gateway?

In today's web-based world, web threats are at an all-time high. Whether it's an iFrame injection, a drive-by download, phishing, or just plain malware, end-users browsing the web are at a higher risk than ever before of having their computers and identities compromised. It's no surprise, then, that more companies than ever are looking to implement a Secure Web Gateway or update their existing gateways.

For many the term Secure Web Gateway is interchangeable with the term proxy, but not all Secure Web Gateways are proxies. It's an important distinction to make, because originally Secure Web Gateways were implemented to enforce corporate or organizational policy (such as preventing shopping on the web during office hours), but in today's threat laden world, having a proxy in the Secure Web Gateway is more important than ever in the battle against cybercrime, malware and phishing.

By specifically requiring a proxy in the Secure Web Gateway, you're guaranteed to terminate all traffic at the proxy. This means when a client makes an HTTP request, it goes to the proxy, and the proxy responds, acting as the server accepting the connection. The proxy then acts like the client and makes the same request the client made to the destination server. By forcing all traffic to terminate at the proxy, the proxy can inspect all the traffic flowing through the device and make sure nothing flows through without inspection.
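As an illustration of that terminate-and-replay behaviour, here is a minimal sketch of an explicit forward proxy (plain HTTP, GET only, no caching or policy engine): it accepts the client's request as if it were the server, issues the same request to the real destination as if it were the client, and has every byte of the response available for inspection before the client sees it. This is just the architectural idea, not a production gateway.

from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

class TerminatingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # With an explicit proxy, the browser sends the absolute URL,
        # e.g. "http://example.com/index.html", as the request path.
        upstream = urllib.request.urlopen(self.path, timeout=10)
        body = upstream.read()

        # Inspection point: every byte is available here before the
        # client sees it (URL filtering, AV scanning, DLP would hook in).
        self.send_response(upstream.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TerminatingProxy).serve_forever()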

Alternative Secure Web Gateway deployments, such as TAP (or SPAN port) deployments, have the gateway sitting off to the side of the network, observing traffic as it passes by instead of intercepting and terminating all traffic. These deployments have the specific flaw that malware or other threats can get by if the gateway doesn't detect the threat in time or doesn't send out a TCP reset packet quickly enough to disrupt the flow of traffic. It's not a guaranteed security mechanism. It may have worked okay for enforcing organizational policy, but it's definitely not a safeguard against web-borne threats.

Today, the only true way to have full protection against web threats is to intercept all web-bound traffic using a proxy architecture. Depending on the vendor, your proxy device may also intercept and protect other internet-bound traffic such as FTP, telnet, and other protocols. Protecting your mission-critical network from inbound threats should be a top priority, and you need to make sure your Secure Web Gateway processes all the traffic by using a proxy architecture.

Sunday, August 22, 2010

IE6 Still Used By 20% Despite Flaws

From: http://www.informationweek.com/blog/main/archives/2010/08/ie6_still_used.html

Summary

According to Zscaler's latest State of the Web report, one in five business users continue to browse with IE6, despite its being nine years old and far less secure than newer browsers.

Article

The latest State of the Web report from Zscaler holds plenty of interesting -- and scary -- insights into the threat environment, but one item in particular caught my eye.

According to the security firm's tracking of Web traffic, 20% of business users are continuing to use Microsoft's Internet Explorer 6, despite the browser's being seriously out of date, and seriously risky. While Zscaler's IE6 numbers are higher than some, it's clear that a large number of users continue to stick with the old browser, despite every encouragement -- not to mention need -- to upgrade or replace it.

At nine years old and counting, IE6 has been out of date and risky for a while. Over a year ago, Matt McKenzie described IE6 as a "Ford Pinto with a leaky fuel tank", and it's hard to top that -- except for the fact that another year has gone by and the leaky vehicle is still being driven by a lot of people.

Zscaler does see IE6 usage -- and Explorer usage overall -- declining. But the persistence of the browser says much about the dilemma of employees sticking with flawed, dangerous technology.

It's pretty easy to come up with some obvious explanations for the browser's longevity. If you or your employees are still running IE6, ask yourself if any of these apply:

Budget: Your company bought machines with IE6 installed, and has never upgraded either software or hardware.

Inertia: IT is not a primary focus at your company; if it's working, keep working with it.

Good enough technology: One of the non-security knocks against IE6 is that it's not up to the demands of the Brave New Web -- hence the number of apps that are dropping support for IE6. If your company isn't interested in the new Web, why should you invest the time required to upgrade your browsers?

Lack of awareness: A subset of both inertia and good enough technology, this one probably explains a large percentage of the holdouts. It's the same thing that explains why so many security holes remain unpatched long after patches are released.

Stubbornness: The best example of this is the UK government's recent decision to stick with IE6, explaining that it's "more cost effective in many cases to continue to use IE6 and rely on other measures, such as firewalls and malware scanning software, to further protect public sector Internet users." In other words, put a catchpan under the leaky gas tank, but keep on driving.

None of these explanations makes much more than surface-level sense today. With browsers of every variety rapidly becoming the attack vector of choice, holding onto an old, flawed browser leaves you unprepared for either today's threats or today's Web.

Time to retire IE6 from your business, if your business is one of the ones still running it.

And while you're at it, you might run an age and ability check on all the other software you run.

Saturday, August 21, 2010

Malware Threats At Record High

From: http://www.itproportal.com/portal/news/article/2010/8/17/mcafee-warns-malware-threats-record-high/


McAfee has said that it has registered record levels of new malware threats over the first half of 2010.

The security company said in its quarterly report that it had been indexing 55,000 new malware threats every day during the first half of 2010.

McAfee suggests that the increased threat of malware is due to the rise of technological progress.

The company's director of security, Greg Day, suggested that the rise in malware could also be due to development of sophisticated malware generation tools.

He said that an increase in malware allowed hackers to exploit individuals and enterprises in new ways.

In a statement, Day said: “Now [there] is what we call malware generation tools [which] let you create different kinds of threats, but they can do it in hundreds and thousands of different guises.”

McAfee advised users to apply ethical hacking techniques to check the strength of their network and applications and fix flaws before rogue hackers can exploit their vulnerabilities.


Friday, August 20, 2010

$1 Million Stolen from UK Bank Accounts by New Zeus Trojan

From: http://www.spamfighter.com/News-14952-$1-Million-Stolen-from-UK-Bank-Accounts-by-New-Zeus-Trojan.htm


Researchers at M86 Security have disclosed another botnet built on the Zeus Trojan, named Zeus v3, which has been swiping bank information from accounts at an unnamed financial institution in the UK. This ongoing attack is known to have stolen £675,000, or nearly $1.1 million, from customers between July 5 and August 4, 2010.

Security firm M86 has elaborated that in addition to using the Zeus v3 Trojan, the cyber criminals are using the Phoenix and Eleonore exploit kits. These kits exploit victims' browsers to inject Trojans into their PCs.

The process began with corrupt banner advertisements placed on legitimate websites. Users who followed an advertisement were taken to a malicious website hosting the exploit kits, which then infected their computer systems, said the security researchers.

With Zeus v3 on the victims' PCs, their online bank account details, such as date of birth, ID, and a security number, would be transferred to the command and control server. When the user entered the transaction portion of the site, the Trojan would report to the C&C (command and control) system and receive new JavaScript to replace the original JavaScript from the bank. Once the user submitted the transaction form, more data was sent to the C&C system instead of the bank.

Bradley Anstis, Vice President of Technical Strategy for M86, threw light on the latest sophisticated attack. Anstis said that the initial infection, where the exploit kit compromised the victim's machine, used a number of vulnerabilities listed in the firm's paper. One of the vulnerabilities was an Internet Explorer flaw affecting IE v6 and v7, as reported by news.cnet on August 10, 2010.

Any one of the six or so vulnerabilities could have been used for the initial infection; the exploit kit tests the victim's machine against each of them in order to achieve a successful infection.

In another statement, Anstis concluded that the only way of protecting against such attacks within the browser is to implement real-time code analysis technologies that can detect and block malicious commands proactively, as reported by computerweekly on August 13, 2010.

SPAMfighter News, 18-08-2010

Thursday, August 19, 2010

Do You Really Need Anti-virus in Web Filtering?

The question of anti-virus or anti-malware in the Secure Web Gateway is one that many organizations face when trying to deal with the onslaught of threats from the web. Traditionally, web gateways include features such as proxy capability, URL filtering, and maybe even real-time web page categorization to help secure the organization's users from web threats and to enforce corporate policy.

The argument many organizations make is that they are already paying for URL filtering, real-time web rating, and an anti-malware program on the desktop. Why should they spend more to get anti-malware and anti-virus on the Secure Web Gateway? What benefit, if any, do the end-user and the organization get from adding anti-malware to the Secure Web Gateway, when the end-user is supposedly already protected by a desktop anti-virus program?

These are good questions, and ones the organization needs to look at carefully when deciding whether to add anti-malware to the Secure Web Gateway. While an organization may have anti-malware programs running on its end-users' desktops, it generally has little control over how often these programs are updated, or whether they are even running (some end-users may have disabled them to gain performance on their laptops or desktops). Would you trust your corporate security to your end-users? By relying on desktop anti-malware, you're essentially relying on the end-user to practice the best cyber hygiene.

Maybe as an administrator you already trust that URL filtering and dynamic real-time rating are protecting you from web threats. While these two technologies are great as part of a layered defense, they each serve a distinct role in protecting the end-user and the organization. A URL filtering database provides the quickest way to give an end-user feedback on whether a website is safe. Known bad websites will already be categorized as malicious.

A website URL not found in the URL filtering database moves to the next layer of defense, typically a cache of URL information hosted at the vendor's site, and then, if still not found, a real-time rating system that examines the website on the fly to determine its category. All these mechanisms drive toward determining not only the category of a website, but also whether that website has malicious content, and then blocking it (or an embedded URL that contains malicious content) as appropriate.
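A sketch of that lookup cascade might look like the following. The three lookup functions are hypothetical stand-ins for the on-box URL database, the vendor's cloud cache, and the real-time rating engine; the category names are illustrative only.

LOCAL_URL_DB = {"known-bad.example": "malicious", "news.example": "news"}

def lookup_local(host):
    # Fastest layer: the on-box URL filtering database.
    return LOCAL_URL_DB.get(host)

def lookup_cloud_cache(host):
    # Next layer: the vendor-hosted cache of recent ratings (a miss here).
    return None

def rate_in_real_time(host):
    # Slowest layer: fetch and analyze the page on demand.
    return "uncategorized"

def categorize(host):
    # Try each layer in order, stopping at the first answer.
    for layer in (lookup_local, lookup_cloud_cache, rate_in_real_time):
        category = layer(host)
        if category is not None:
            return category
    return "uncategorized"

def allow(host):
    return categorize(host) not in {"malicious", "phishing"}

print(allow("known-bad.example"))  # False
print(allow("news.example"))       # True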

All this sounds great, and many administrators may be lulled into thinking they are completely protected by this layered defense. But in reality they should add one more layer, and that's anti-malware/anti-virus scanning at the Secure Web Gateway. Why is this necessary? Think about what happens when a known good website gets attacked and ends up infected with malware or a virus. There's going to be a period of time before a URL database, URL cache, or even a real-time rating system picks up on the infection. Until that information is updated, the website is being passed on as a "good" site. An anti-malware program at the gateway adds the layer of defense that catches the infection and prevents the end-user from downloading a virus in that short window of vulnerability.

No infection is a good infection, and layered defense is a necessity with today's web threats. Make sure you close an additional window of vulnerability by adding anti-malware/anti-virus to your Secure Web Gateway. Using a different anti-virus vendor at the gateway than on the desktop also adds another layer of protection: if one vendor misses a threat, the other has a greater chance of recognizing it.

Wednesday, August 18, 2010

A Powerpoint Presentation Explaining Proxies

In case you were looking for more material on explaining why you need a proxy and what a proxy does, I discovered a new PowerPoint presentation, along with complete speaker's notes, here:

http://www.authorstream.com/Presentation/smrutiprayag-475558-proxy-servers/

It goes through the basics and explains why and how to use a proxy for web and email.

SSL Proxies

Found this recently, a topic we've talked about in the past here at The Proxy Update, and one gaining more relevance all the time.

From: http://www.infosecblog.org/2010/08/ssl-proxies/


Because port 80 is open outbound through the firewall, many applications send their traffic across it to avoid firewall issues. This has led to port 80 being called the Firewall Traversal Exploit. Port 443, then, is the Secure Firewall Traversal Exploit, because it allows traffic out in an encrypted fashion.

Because it's encrypted, users can bypass the protections in place for HTTP to download viruses, access forbidden sites, and leak confidential information. This is limited only by the availability of SSL sites. In recent years, webmail like GMail has moved to full SSL sessions. Bad guys can easily set up SSL as well. Without an SSL proxy, all you can do to address these concerns is block by IP address, and IP addresses change frequently and are less likely to be categorized in a URL block list.

When you use an SSL proxy, the web traffic is terminated at the proxy server and a new request is made to the remote server. The client browser uses a certificate from the proxy to secure data during the first leg of this transaction. This will result in a certificate error if you don't deploy the proxy's self-signed certificate as a trusted root. Because the client never sees the certificate of the remote server, the user does not get information about the trustworthiness of that certificate. For this reason it is necessary to either block all bad certificates or make sure your SSL proxy can pass on that certificate information when the certificate is expired or does not chain to a trusted root.

The SSL proxy can use the hostname (CN) in the server certificate to make a URL categorization decision to intercept or tunnel the traffic.
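In code, that decision might look roughly like the sketch below: peek at the server certificate, pull out the CN, categorize it, and decide whether to intercept or tunnel. The categorize() helper is a hypothetical stand-in for the gateway's URL categorization lookup, and tunnelling financial sites reflects the privacy advice discussed earlier on this blog.

import socket, ssl

def categorize(hostname):
    # Hypothetical stand-in for the gateway's URL categorization lookup.
    table = {"bank.example": "financial", "mail.example": "webmail"}
    return table.get(hostname, "uncategorized")

def server_cert_cn(host, port=443):
    # Make our own TLS connection to the server and read its certificate.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # cert["subject"] looks like ((("commonName", "example.com"),), ...)
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return host  # fall back to the requested hostname

def decide(host):
    category = categorize(server_cert_cn(host))
    if category == "financial":
        return "tunnel"      # leave banking traffic encrypted end to end
    if category in {"malicious", "webmail", "uncategorized"}:
        return "intercept"   # decrypt, categorize further, and scan
    return "tunnel"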

Because you can intercept based on URL categorization, you could choose to intercept (and block) only websites that are in your blocked categories. This is the simplest implementation of an SSL proxy. It blocks sites that wouldn't have been blocked before, and it doesn't interfere with anything else. If a computer doesn't have your certificate in its trusted root, it's not that bad, because the site would have been blocked anyway.

A slightly more intrusive step is to also intercept webmail sites. Webmail has the potential to deliver malware downloads even though the site itself is valid. By intercepting the site, the download is scanned by the antivirus layer. A related idea is intercepting all uncategorized sites so they can be scanned.

A full implementation involves intercepting everything not categorized as a financial site. It is not recommended to intercept financial websites, for obvious reasons. Intercepting everything allows you to scan all downloads for viruses. The main drawback is that you'll have more issues with web applications that don't conform to HTTP standards.

I think the simplest option of only intercepting websites classified in categories on your block list is best. It provides additional security without potential for complications. You’d have to make a security decision for your own environment.

There are security considerations to intercepting traffic. When you only intercept a site to block it, you don't handle sensitive data, but as you intercept other categories you must take care: sensitive data may now be exposed in clear text. You may want to think twice about what you are logging and caching. If any off-box analysis is performed, you need to encrypt the connection and make sure nothing is left on the remote box.

A lot of attacks occur over the web, and it's important to provide the best defense. It's no longer good enough to ignore 443/TCP.

Tuesday, August 17, 2010

Five billionth device about to plug into Internet

From: http://www.networkworld.com/news/2010/081610-5billion-devices-internet.html?source=NWWNLE_nlt_daily_am_2010-08-17

Sometime this month, the 5 billionth device will plug into the Internet. And in 10 years, that number will grow by more than a factor of four, according to IMS Research, which tracks the installed base of equipment that can access the Internet.

On the surface, this second tidal wave of growth will be driven by cell phones and new classes of consumer electronics, according to an IMS statement. But an even bigger driver will be largely invisible: machine-to-machine communications in various kinds of smart grids for energy management, surveillance and public safety, traffic and parking control, and sensor networks.

Earlier this year, Cisco forecast equally steep growth rates in personal devices and overall Internet traffic. [See "Global IP traffic to increase fivefold by 2013, Cisco predicts"]

Today, there are over 1 billion computers that regularly connect to the Internet. That class of devices, including PCs and laptops and their associated networking gear, continues to grow.

But cellular devices, such as Internet-connected smartphones, have outstripped that total and are growing at a much faster rate. Then add in tablets, eBook readers, Internet TVs, cameras, digital picture frames, and a host of other networked consumer electronics devices, and the IMS forecast of 22 billion Internet devices by 2020 doesn't seem farfetched.


The research firm projects that in 10 years, there will be 6 billion cell phones, most of them with Internet connectivity. An estimated 2.5 billion televisions today will largely be replaced by TV sets that are Internet capable, either directly or through a set-top box. More and more of the world’s one billion automobiles will be replaced by newer models with integrated Internet access.

Yet, the greatest growth potential is in machine-to-machine, according to IMS President Ian Weightman. Research firm Gartner named machine-to-machine communications one of the top 10 mobile technologies to watch in 2010. And almost exactly one year ago, Qualcomm and Verizon created a joint-venture company specifically to support machine-to-machine wireless services.

"This has the potential to go way beyond industrial applications to encompass [such applications as] increasingly sophisticated smart grids, networked security cameras and sensors, connected home appliances and HVAC equipment, and ITS infrastructure for traffic and parking management," Weightman said in a statement.

Monday, August 16, 2010

IPV6 Proxy

Mention IPv6, and I believe most people will know what you are referring to. But that's all; they will be limited to a general recognition of what you're talking about. Since the birth of IPv6 in the mid-1990s, discussion of the topic has been hot, but among the majority of users there are few who can really use IPv6 applications!

Where does the problem lie? On the one hand, the deployment of IPv6 itself is small; on the other, and even more crucial, is the lack of interoperability between traditional IPv4 applications and the IPv6 network, which makes IPv6 networks and applications basically their own little islands: traditional IPv4 users do not have access, and the development of new IPv6 networks is not yet widespread.

The root cause of this situation is, in fact, that the IPv6 protocol is incompatible with existing IPv4 technology. As 51CTO.com reported previously, the Internet Engineering Task Force (IETF) admitted that it committed a fatal error in the IPv6 standards by not providing a way for the new protocol to be backward compatible with IPv4. Qing Li, a senior IPv6 architect and security expert at Blue Coat in the United States, said in an interview: "The lack of real applications for IPv6-oriented solutions forces enterprises to consider either an appropriate relocation program [to IPv6 networks] or a comprehensive upgrade. The costs of a comprehensive upgrade are huge, enough to make most companies balk."

In other words, the current key issue is how to get users from IPv4 to IPv6, basically a transition and convergence between the two. Mr Li said: "A smooth shift to IPv6 requires a strategy for the safe migration of business applications and services." Well, is there a solution to this problem?

The answer is yes. It is called an IPv6 Proxy.

The IPv6 proxy is a proxy between IPv4 and IPv6 networks, allowing for transition and conversion with a single piece of equipment. Mr Li explained that the intelligence behind the IPv6 proxy is that it allows user access between the networks without address translation: administrators do not have to rewrite applications or upgrade IT infrastructure, and IPv6 applications can be accessed from IPv4 networks today. Services and data in both the IPv4 and IPv6 environments can interact smoothly now, with an easy migration path for tomorrow.

In other words, the IPv6 proxy acts as both client and server, regardless of whether the real client is IPv4 or IPv6, so that an IPv6 client can communicate with an IPv4 server without special equipment. Similarly, when a traditional IPv4 client sends a request to an IPv6 server application, the IPv4-to-IPv6 proxy intercepts the request and converts it into an IPv6 request to the server; when the server returns the information, it also goes through the IPv6 proxy before eventually returning to the client.
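The following is a minimal sketch of the bridging idea (not Blue Coat's implementation): a listener terminates the IPv6 client's TCP connection on one side and opens an ordinary IPv4 connection to the legacy server on the other, relaying bytes in both directions. The address and ports are placeholders, and a real IPv6 proxy also works at the application layer rather than blindly relaying bytes.

import socket, threading

IPV4_SERVER = ("192.0.2.10", 80)   # placeholder legacy IPv4-only service

def relay(src, dst):
    # Copy bytes one way until the source closes, then close the peer.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def serve():
    listener = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    listener.bind(("::", 8080))    # accept connections from IPv6 clients
    listener.listen(5)
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(IPV4_SERVER)  # IPv4 upstream leg
        threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
        threading.Thread(target=relay, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    serve()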

How does this conversion work? Mr Li explained that the Blue Coat IPv6 proxy terminates and repackages traffic at the TCP layer. As we all know, TCP works at the fourth layer, on top of IP, while applications are built on top of TCP or UDP. The Blue Coat IPv6 proxy accepts the client's TCP request on one side, analyzes the application-layer protocol request, applies the security policies that match the request, repackages the packets over a newly negotiated connection, and issues the appropriate request to the server, while the client receives a normal response.

From the client's view, requests and responses proceed normally as expected (whether IPv4 or IPv6), without any address translation or other intervention on the client, so everything is transparent. To the server, the client appears to send a request that conforms to the IPv6 protocol itself. For existing enterprise IPv4 applications, the IPv6 proxy device can issue IPv4 requests on behalf of IPv6 clients, allowing for a more secure IPv6 backbone network and letting IPv4 applications use IPv6 applications and services as well.

With the IPv6 proxy, the challenge of IPv4 and IPv6 interaction appears to be solved. But a company may ask: with conversion between the two protocols, are there any safety concerns? Moreover, with the deployment of such a device, will network transmission speed and quality be affected? Will a proxy affect existing applications? Will it greatly increase network cost?

Mr Li explained that with IPv6 proxy equipment and networks, companies face several challenges. He sums them up into four areas, which can be remembered by the initials "SUVA".

First, companies have the issue of content security control (Security): how to use IPv6 and the proxy to ensure that enterprise applications meet business management and compliance needs, while eliminating the need to re-certify existing security policy.

Second is usability (Usability): making sure the product is convenient and reliable to use, and that existing applications remain transparent and work as before.

Third is visibility (Visibility), where the demands on an IPv6 proxy are very high. With all applications passing through it, the proxy needs visual management tools to give network administrators insight into application flows and a comprehensive understanding of web content and control. Achieving effective application performance monitoring and adjusting network resources accordingly is key.

Finally, there is acceleration (Acceleration) of application-layer protocols, where a business needs its applications to perform. As an intelligent device, the IPv6 proxy should also provide acceleration capability so that the network and applications deliver the best experience for end-users.

It should be said that these challenges also shaped the development of the IPv6 proxy, and are an important reason the product was relatively difficult to bring to market. Mr Li explained that Blue Coat's first-generation IPv6 proxy required five years of R&D. The key to its success is that the intelligent IPv6 proxy's handling of client requests is not just a simple address translation, but covers a number of management issues, including analysis of applications, auditing and security management, and content caching strategies for acceleration. In the end, only if a request meets the enterprise's security management requirements will the proxy request be issued, and optimized.

Editor's note: This article is translated from Chinese, and the grammar has been corrected, but we haven't taken the extensive time necessary to make this article flow and read like fluent English.

Tuesday, August 10, 2010

Top Ten Web Malware Threats

From: http://www.esecurityplanet.com/print.php/3897476

Websites that spread malware may be leveling off, but Web-borne malware encounters are still growing. According to a 2Q10 Global Threat Report published by Cisco, criminals are using search engine optimization and social engineering to become more efficient, luring more targeted victims to fewer URLs.

Using IronPort SenderBase, Cisco estimated that search engine queries lead to 74 percent of Web malware encounters in 1Q10. Fortunately, two-thirds of those encounters either did not deliver exploit code or were blocked. But that means 35 percent of Web-borne exploits are still reaching browsers, where they try to drop files, steal information, propagate themselves, or await further instructions.

Browser phishing filters, anti-malware engines, and up-to-date patches can play a huge role in defeating malware reaching the desktop. However, to find unguarded vectors and unpatched vulnerabilities, let's look at how today's most prevalent Web malware works.

#10: Last on Cisco's list of 2Q10 encounters is Backdoor.TDSSConf.A. This Trojan belongs to the TDSS family of kernel-mode rootkits; TDSS files are dropped by another Trojan (see Alureon, below). Once installed, TDSS conceals associated files and keys and disables anti-virus programs by using rootkit tactics. Removing TDSS from a PC is difficult; using up-to-date anti-malware to block the file drop is a better bet.

#9: Ninth place goes to an oldie but goodie, Mal/Iframe-F. Many variants use this popular technique: inserting an invisible HTML iframe tag into an otherwise legitimate Web page to surreptitiously redirect visitors to other Websites. Hidden iframes may elude detection by the human eye, but Web content scanners can spot them and Web URL filters can block redirects to blacklisted sites.
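Here is a rough sketch of the kind of check a content scanner can apply for this technique: parse the page and flag iframe tags that are effectively invisible (zero-sized or styled display:none). Real scanners use far more signals, so treat this as an illustration only; the example page and URL are made up.

import re
from html.parser import HTMLParser

class HiddenIframeFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        a = dict(attrs)
        # Flag frames the user cannot see: zero/one-pixel size or hidden CSS.
        zero_sized = a.get("width") in ("0", "1") or a.get("height") in ("0", "1")
        hidden_css = bool(re.search(r"display\s*:\s*none|visibility\s*:\s*hidden",
                                    a.get("style", "") or "", re.I))
        if zero_sized or hidden_css:
            self.suspicious.append(a.get("src", "<no src>"))

def find_hidden_iframes(html_text):
    finder = HiddenIframeFinder()
    finder.feed(html_text)
    return finder.suspicious

page = '<iframe src="http://bad.example/x" width="0" height="0"></iframe>'
print(find_hidden_iframes(page))   # ['http://bad.example/x']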

#8: In a dead heat with Iframe-F is JS.Redirector.BD, a JavaScript Trojan that also redirects users to Websites they had not intended to visit. Like some other members of the large JS.Redirector family, this Trojan tries to evade blacklist filters by using obfuscation techniques like dynamically-generated target URLs.

#7: Nosing past Redirector.BD is Backdoor.Win32.Alureon. Alureon refers to a family of dynamic, multi-faceted Trojans intended to generate revenue from a victim's Web activities. Malware components within each instance vary, but Alureon has been seen to alter DNS settings, hijack search requests, display malicious ads, intercept confidential data, download arbitrary files, and corrupt disk drivers. In fact, threat reports indicate that Alureon has been used to drop TDSS onto infected PCs.

#6: Tied for middle-of-the-pack is Worm.Win32.VBNA.b. VBNA implants itself in a user's Documents and Settings folder, adding a Run key to the registry. Thereafter, VBNA auto-launches and propagates itself to neighboring PCs via writable fileshares. VBNA also displays a fake virus infection warning to trick users into purchasing fake anti-malware (which is often just more malware). Scare tactics like this appear to be on the rise, preying upon uninformed users.

#5: Next up is JS.Redirector.AT, another member of this Trojan family famous for redirecting users to other Web sites. Destination sites reportedly have displayed porn, phished for confidential data, and implanted malware on the victim's PC. One way to inhibit these Trojans is to disable JavaScript execution – if not in the browser, then in Acrobat Reader to block JavaScript hidden in PDFs. Exploits targeting Adobe PDF, Flash, and Sun Java vulnerabilities were particularly hot in 1H10.

#4: Taking fourth place is Mal/GIFIframe-A, a sibling to the afore-mentioned Iframe-F. GIFIframe-A also uses iframe tags, but this family of malware exploits iframes that have been injected into files encoded using popular graphic formats like GIF and JPG. When a user visits an infected Website and attempts to load the graphic, the injected iframe is processed, executing attacker-supplied code.

#3: At third, representing three percent of 2Q10 encounters, is a keylogger called PSW.Win32.Infostealer.bnkb. Dozens of Infostealer variant Trojans exist, targeting a wide variety of institutions and their customers. All work by capturing keystrokes, scanning for specific Web transactions, and stealing usernames, passwords, account numbers – typically those associated with online banking.

#2: A new JS.Redirector variant took second place in 2Q10: JS.Redirector.cq. Like other family members, this Trojan uses malicious JavaScript to redirect users. In this case, users find themselves at Websites that pretend to scan for viruses, then download fake anti-virus code, no matter where the user clicks on the displayed window. But how do legitimate Websites get infected with JS.Redirector in the first place? One reportedly common vector: SQL injection.

#1: First place goes to the now infamous Trojan downloader Exploit.JS.Gumblar. According to Cisco, Gumblar represented 5 percent of all Web malware in 2Q10, down from 11 percent in 1Q10. Gumblar is a downloader that drops an encrypted file onto the victim's system. Gumblar runs that executable without user consent, injecting JavaScript into HTML pages to be returned by a Web server or displayed by a user's Web browser. The injected JavaScript usually contains an obfuscated exploit; early scripts downloaded more malware from gumblar.cn – thus giving this Trojan its name.

Cisco's 2Q10 list was generated by IronPort, which uses Sophos, Webroot, and McAfee malware detection engines. Other vendors use different naming conventions and publish slightly different lists that represent other monitored data sources. And next quarter there will be new lists -- probably composed largely of variants.

The purpose of such lists is not, therefore, to tell you which malware to scan for. That job falls to continuously updated anti-malware defenses installed on desktops, servers, and gateways. Instead, use this list and others like it to identify and proactively fight trends that are likely to persist or grow and target your Web servers and users tomorrow.

Wednesday, August 4, 2010

The 2010 Verizon Data Breach Report Is Out

From: http://isc.sans.edu/diary.html?storyid=9283

This year's data breach report continues that valuable narrative, and is based on a larger case sample than in previous years, thanks to a partnership with the United States Secret Service, which contributed information on a few hundred of its cases. Many of the findings echo those of previous years (excerpts below).


Who is behind Data Breaches?
70% resulted from external agents
48% caused by insiders
11% implicated business partners
27% involved multiple parties

How do breaches occur?
48% involved privilege misuse
40% resulted from hacking
38% utilized malware
28% involved social tactics
15% comprised physical attacks

What commonalities exist? (this was the interesting section for me)
98% of all data breached came from servers
85% of attacks were not considered highly difficult
61% were discovered by a third party
86% of victims had evidence of the breach in their log files
96% of breaches were avoidable through simple or intermediate controls
79% of victims subject to PCI DSS had not achieved compliance

Come on! Not only do folks seem not to be implementing some basic protections, but even when the evidence that they've been compromised is sitting in their log files, no one is listening! I guess this isn't much different from previous years, but it'd be nice to see a positive trend here.

I'm not sure that I believe the low numbers for government data breaches (4%). I guess the report can only summarize data from cases that are "seen" by the incident handlers.

Find the full report here ==> http://www.verizonbusiness.com/resources/reports/rp_2010-data-breach-report_en_xg.pdf

Take a few minutes to read it over coffee this morning - I found it a good read, and just about the right length for that first cup!

Tuesday, August 3, 2010

Companies slow to create social media rules

From: http://www.detnews.com/article/20100802/BIZ04/8020349/1001/Companies-slow-to-create-social-media-rules


Companies beware: Employees aren't the only ones who should worry about a social media backlash.

Studies show that creating social media policies for employees helps companies prevent problems, but most firms would rather ignore the issue.

"I think it's hard to avoid," said Dean Pacific, a labor employment lawyer for Warner Norcross & Judd in Grand Rapids.

"I can't imagine telling anyone, 'Just completely stay away from social media. Pretend like it's not there.'"

But that's exactly what many companies are doing.

A Manpower survey of 34,400 companies worldwide found that 20 percent have a social media policy. Experts say ignoring the problem will only make it worse.

Instead, Pacific encourages companies to craft a social media policy that applies to both employees and management.

"We're still as a society trying to figure out what the limits and the boundaries are," said Michael Fertik, CEO and founder of ReputationDefender, a worldwide online reputation management and privacy company based in Redwood City, Calif.

"I don't think it's established yet because the technology's moving a lot faster than the law is."

Most companies with a social media policy reserve the right to monitor employee activity on work computers. Nothing workers write -- on e-mail, Facebook, Twitter or any other social network -- is private.

Some companies take it a step further by blocking social media sites on work computers.

ScanSafe, an online security company that provides a website blocking service to thousands of global corporations, found that 76 percent of its clients block social media sites. These companies place a greater focus on blocking sites like Facebook and YouTube than on online shopping, weapons, or alcohol sites.

"They view it similar to pornographic or gambling or shopping sites," Pacific said. "The big concern is that they're time wasters -- they're productivity busters."

Ford Motor Co. is among the businesses that realize they can't prevent employees from using social media. So, the Dearborn automaker asks workers to explain that the views expressed are their personal opinions -- not those of the company.

"We're not authorizing an employee to be a spokesperson," said Scott Monty, who manages Ford's social media programs. "We want to make it clear that if they choose, they can talk about Ford, but they have to do so from a personal perspective."

Most companies also forbid employees from sharing trade secrets.

Pacific said many leaks are inadvertent. Employees who leave a company and are looking for new jobs might mention on their LinkedIn resumes that they were responsible for "$190 million in sales." Others who are still with the company might use Facebook to tout a cool product they are developing.

"It may not be bad intentions at all -- and it probably isn't -- but it can lead to harm to the company if this confidential information gets out there," Pacific said.

Other times the information that gets leaked isn't true at all. "But it still hurts the reputation of the company, because it looks real," Fertik said.

One of the biggest problems companies face is privacy violations. A growing number of employers check sites like Facebook and MySpace to screen job candidates, but experts say companies should be careful not to invade their privacy.

"Just on the very first page, you're going to see their sex, date of birth, marital status and their religious views," Pacific said. Asking about a job candidate's age, marital status and religious views in an employment interview is illegal.

Bosses who spy on employees by sneaking onto their private profiles and digging through personal information could jeopardize the company's finances. And bitter candidates who discover they didn't get a job because of something on their profile might file a discrimination lawsuit.

So Pacific recommends companies take precautions by having one employee collect information from the site and another review the information.

Monday, August 2, 2010

The Rising Role of Compliance Social Computing

From: http://www.networkworld.com/community/node/64427



When it comes to implementing a social computing strategy, many companies still abide by the security axiom of “only allow what is specifically permitted.” In fact, according to our latest research, 40 percent of (all?) companies block all access to public social networking sites such as Facebook, Twitter, YouTube, or MySpace. These companies tend to have conservative views toward technology, and consider the risk of allowing access too great—often from both security and productivity standpoints.

Even many of the 60 percent of companies that now allow access restrict it in some way, either by time of day, approved groups (e.g. marketing, customer service, sales), or for specific use cases. We often hear from IT leaders that initial efforts to block access entirely were thwarted by legitimate business needs for the organization to participate in public social networks. In some cases, the path is reactive; access is wide open until someone in the legal and/or compliance functions becomes aware of the usage. Often the reaction then is a knee-jerk "full stop" for any social-computing activities.

There are four primary areas involved in addressing security and compliance concerns related to social computing:

• Breach. Forty-six states, the District of Columbia, Puerto Rico, and the Virgin Islands have adopted legislation requiring notification of security breaches involving personal information. In addition, federal regulations, such as the Health Information Technology for Economic and Clinical Health (HITECH) Act, contain breach notification clauses.

• Attack. Compliance with legislation and regulations, such as the Health Insurance Portability and Accountability Act (HIPAA) and the Payment Card Industry Data Security Standard (PCI-DSS), mandate organizations implement security best practices to prevent access to sensitive data.

• Unacceptable use. Typically, this is an internal policy that defines the acceptable use of corporate assets. Employees sign off on the policy and its ramifications, often as part of an employment contract.

• Accountability. Specific compliance requirements, such as Financial Industry Regulatory Authority (FINRA) 10-6, require logging of trader communications. This may also include an internal compliance requirement to monitor and log communications with a high likelihood of becoming part of litigation and e-discovery.

One of the greatest concerns about social networking is the risk of a sensitive data breach. Social-computing applications are potential conduits to breach sensitive information—specifically, personally identifiable information and protected health information. Examples include a real-time breach in which sensitive information transfers in the clear during, for instance, an IM chat. Another example is a non-real-time breach in which a file transfers (intentionally or unintentionally) as part of a Skype session, for instance. Remember, the breach is the disclosure, not the exploitation of the data.

In addition to the breach challenge itself, there is also the challenge of tracking a breach. Without proper audit controls on all social-computing communications, the organization most likely will be totally unaware of any sensitive data breach. There is an additional concern that even when a company knows of a breach, it may have little recourse to limit its exposure. For example, a team within a manufacturing company sets up a Facebook group to facilitate collaboration on the design of its newest, revolutionary manufacturing process. There is an assumption of privacy, since the group is closed to everyone except the team. However, all it takes is one user without properly set privacy settings whose account is compromised to enable leakage of the group's information. Users may not go through the due diligence required to determine data ownership for materials placed on public social sites.

Sites such as Facebook and LinkedIn are primary targets for cyber criminals. The primary attack vector is sending a legitimate looking link—from a “friend”—that takes the user to an infected Web page. The user opens the Web page, clicks on a link and inadvertently downloads malware, exposing the enterprise to significant risk. As discussed, blocking access to these sites is not absolute so the only option in this case is to block access to bad URLs.

Security practitioners who participate in Nemertes’ research indicate this attack vector is becoming one of their greatest challenges. In addition, recent reports indicate millions of compromised or fake Facebook accounts are available for sale. Protecting against these attacks requires, at a minimum, a Web-content-aware firewall with granular filtering to dynamically block access to specific Web pages, and even specific areas of an individual Web page. Companies that provide solutions in this area include Blue Coat, FaceTime Communications in partnership with Blue Coat, Palo Alto Networks, Socialware, Trend Micro, Webroot, WebSense, and Zscaler.
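The "granular filtering" idea can be illustrated with a small sketch: rather than blocking facebook.com outright, block only specific paths while leaving the rest of the site reachable. The hostnames and path prefixes below are purely illustrative, not any vendor's actual rule set.

from urllib.parse import urlparse

# Illustrative per-site path rules: block only these areas of each site.
BLOCKED_PATH_PREFIXES = {
    "www.facebook.com": ["/apps/", "/plugins/"],
    "www.linkedin.com": ["/redir/"],
}

def is_blocked(url):
    parts = urlparse(url)
    prefixes = BLOCKED_PATH_PREFIXES.get(parts.netloc, [])
    return any(parts.path.startswith(prefix) for prefix in prefixes)

print(is_blocked("https://www.facebook.com/apps/some-game"))  # True
print(is_blocked("https://www.facebook.com/acme-corp"))       # False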

As more companies shift from “block everything” to “block some things,” the need for a proactive social security and compliance strategy will continue to gain importance.