Welcome to the Proxy Update, your source of news and information on Proxies and their role in network security.

Wednesday, December 15, 2010

Is the Cloud the Future of Web Proxy?

A quick visit to IronPort's website could make you wonder whether they sell web gateway appliances anymore. The site focuses on DLP, their mid-year security report, and the cloud. In addition, a number of Secure Web Gateway manufacturers have acquired cloud services or announced intentions to start their own, which could easily lead one to wonder whether secure web gateways and proxies have a limited shelf life and will soon be replaced by the cloud. The cloud has some obvious benefits. The first, of course, is no hardware cost, so no capital expenditures. Everything is an operating expenditure, and you pay for the service on a monthly basis. That's great in a down economy when you're trying to cut capital spending.

But the question is whether the total cost over the life of the service exceeds that of a hardware-based solution, and where the break-even point lies beyond which hardware would be cheaper. And what happens when the economy picks up and the company is willing to make capital expenditures again? The cloud may look preferable today, but it may not tomorrow.
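The break-even question above is simple arithmetic once you pin down the numbers. Here's a minimal sketch; all the dollar figures are illustrative assumptions, not vendor pricing:

```python
# Hypothetical break-even sketch: appliance capex plus monthly support
# versus a cloud subscription billed monthly. Figures are illustrative.

def break_even_month(capex, monthly_support, monthly_subscription):
    """Return the first month at which cumulative appliance cost drops
    below cumulative cloud cost, or None if it never does."""
    if monthly_subscription <= monthly_support:
        return None  # the subscription is always cheaper per month
    month = 0
    while True:
        month += 1
        appliance_total = capex + monthly_support * month
        cloud_total = monthly_subscription * month
        if appliance_total < cloud_total:
            return month

# e.g. a $30,000 appliance with $500/month support vs. a $1,500/month service
print(break_even_month(30_000, 500, 1_500))  # 31 months
```

If your planning horizon is shorter than the break-even month, the cloud wins on cost alone; beyond it, hardware does.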

In addition, there are the usual complaints about a service offering, including the inability to schedule your own maintenance windows. With a service you're bound by the provider's operating windows, so you need a good SLA in place with the provider to ensure uptime.

Finally, a service offering has to appeal to organizations of all sizes, from the smallest to the largest, since any size company could be a customer. That means the interfaces and the mechanisms for establishing policy on the secure web gateway are going to be the ones that are easiest to use and have the widest appeal. While this probably works for small organizations, the policy engine is likely not sophisticated enough to handle most large organizations' needs, both for acceptable use policy across the organization and for the differences that may be necessary from department to department.

For these reasons, it's not likely the cloud will take the place of the secure web gateway. Instead, both will probably be offered for the foreseeable future, and each has its place depending on the size and complexity of the organization.

Tuesday, December 7, 2010

The Move from Acceptable Use Policy to Protecting the Innocent

Web filtering really got its start as a way to implement Acceptable Use Policy (AUP) in organizations that wanted to make sure their employees were spending their time on the Internet at websites that met corporate acceptable use guidelines. With the growth of the web and the spread of malware from email to websites, the focus of web filtering has moved from implementing AUP to protecting the casual web user from the malware and drive-by downloads they might get from good or bad sites.

The malware itself isn't exactly new; much of what's prevalent today depends on techniques that have been in use for years. What has changed is the subtlety with which it's delivered. Rather than an anonymous email asking you to watch a video, it's a close friend's hijacked Facebook account sending you a message asking you to watch their kid's latest accomplishment. Click on the video and of course you'll be prompted to update your video codec, which actually downloads malware onto your computer.

An unsuspecting user will naturally trust the person they know rather than the one they don't, making the hijacked Facebook account much more malicious than a spam email asking you to watch some sexy video.

So with this evolution to targeted attacks, protecting the everyday user from malware and drive-by downloads is increasingly important for organizations, as is the role the secure web gateway plays in that protection. That's why it's more important than ever to make sure your web filtering software and subscriptions are up to date, and to use an accompanying anti-malware program that scans everything. Reputation-based exceptions don't really work anymore, since even reputable sites can get hacked and host malware links.

Tuesday, November 16, 2010

Facebook adds Email

If you've been watching the news this week, it was unavoidable: you inevitably saw the announcement from Facebook that they are rolling out email services to their user base, making them the largest email provider in the world. Facebook has long been a thorn in the side of security administrators who manage secure web gateways and proxies. Most companies didn't want their employees visiting social networking sites and spending all their time on them. Times have changed, and even the US military has revised its stance on Facebook, realizing it's an important tool in keeping the troops happy. So like companies that see Facebook as an important marketing tool, the military has to find the right balance between allowing access and making sure employees don't get carried away playing games or using other Facebook applications all day.

Having email in Facebook just adds one more distraction, and provides one additional page to block if your organization's policy already prohibits external access to email. The good news for most security and IT administrators is that modern URL filters and web protection already offer mechanisms, through the use of multiple categories, to allow basic Facebook access while preventing access to specific pages and applications. Allowing the category "social networking" but blocking "games", "alcohol", "pornography", and even "webmail" will block things like Farmville, drinking games, Playboy's Facebook page, and eventually Facebook's email application, since these are generally categorized as both social networking and the appropriate other category they fall into.

Thursday, November 11, 2010

The Super Long URL

Blue Coat's Security Lab's latest post is about what for most people should be an obviously bad URL:

online.citibank.com.us.jps.portal.index.do.signin.logon.citibank.online.secure.sessionid.udppincyyadcjfwjkgporvazebpnejlinbnunptl.qtpycihnqzaepbbwdrgjysgkvvegkvrztfytnffb.cggshinmxvtsmxeesikaeciwhyqscvtfbcxjklti.sid.afterthehunttaxidermy.com/


If you actually saw a URL that looked like the one above you should be immediately suspicious that it's part of an attempt at phishing.

But in actuality, of course, most people never see the URL above; they see the HTML facade created for the email or webpage, and the URL above is just what the HTML links to. But wait, you're thinking: most browsers will show you where a link actually leads, either in the bubble that pops up over the link or in the full link shown in the status bar at the bottom of the page, and I'm smart enough to check that out.

But what's interesting about a URL like the one above is that it's so long that the entire URL won't display in most cases; you only see the front part in your bubble or status bar. And that's the most likely reason the attacker created it. If you aren't careful to check the entire URL, you'll only see the front, and that may be enough to convince some people it's a legitimate link.
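A rough illustration of the idea (this is a toy heuristic, not any vendor's actual detection logic, and the brand list and length threshold are made-up assumptions): flag a URL when its hostname is suspiciously long, or when a well-known brand name appears among the hostname's labels but not in the actual registered domain at the end.

```python
# Toy phishing heuristic: the real domain is at the END of the hostname,
# so a brand name stuffed into the front labels is a red flag.
from urllib.parse import urlparse

def looks_like_phishing(url, brands=("citibank", "paypal"), max_host_len=60):
    host = (urlparse(url).hostname or "").lower()
    labels = host.split(".")
    registered = ".".join(labels[-2:])  # naive guess; ignores ccTLD quirks
    if len(host) > max_host_len:
        return True
    # A known brand appears in the hostname but not in the real domain.
    return any(b in labels and b not in registered for b in brands)

print(looks_like_phishing("https://online.citibank.com.us.evil.example.com/"))  # True
print(looks_like_phishing("https://online.citibank.com/"))                      # False
```

A real secure web gateway would combine a check like this with URL reputation and category lookups rather than rely on string matching alone.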

So be careful, and check the full URL of where you're going on the web, or at the very least make sure you're browsing through a Secure Web Gateway or proxy device that's configured to block phishing sites.

Tuesday, November 2, 2010

Malware hiding in plain sight

It used to be that malware was hosted on domains typically hidden from the average user, hosted in other countries. For example, for a long time malware was most prevalent on ".cm" and ".cn" domains (Cameroon and China respectively). A new report from McAfee shows that malware is now fully entrenched in the ".com" domain. In their latest study, ".com" overtook ".cm" as the top domain hosting malware: 31.3% of all sites on a ".com" domain are considered risky. The ".info" domain came in second with 30.7% of sites rated risky, ".vn" (Vietnam) came in third at 29.4%, and ".cm" fell to fourth at 22.2%.

This study just confirms what we already know: hackers and providers of malware are getting bolder, and there are more threats out there. It's more important than ever to make sure your organization is protected when browsing the web by an up-to-date proxy or secure web gateway.

Monday, November 1, 2010

Appliance, Cloud, or Software

The age-old question of whether to buy an appliance, or to build out hardware yourself and buy software to run on your own general-purpose operating system, has been getting serious competition from the cloud, or SaaS (Software as a Service). IT admins now have three choices when selecting how to implement web security for their organization. The question is how to choose which is right for yours, and the key is that the right answer isn't the same for everyone.

There's an obvious difference between the previous choices of appliance or build-your-own versus a cloud solution, and it's rooted in the accounting, which may not be a key criterion for an IT admin but is certainly a consideration for your finance group. An appliance or build-your-own has capex ramifications, while a cloud solution is limited to opex costs. If your finance arm rules your expenditures, you may not get a choice when it's time to upgrade your proxy or secure web gateway.

But for those of you that do have a choice, it may have to do with how much security expertise you have on hand, how much control you need over your maintenance windows, and how many of your users are remote and travel extensively. Each of these will affect which solution you choose, and may even cause you to consider a hybrid of two solutions. If you happen to have extensive expertise, build your own may be the way to go, especially if you need an extremely custom solution.

For those that need ease of use and quick deployments, an appliance or cloud makes more sense. Those that need control of their maintenance windows should of course avoid a cloud, where they will be bound by the service provider's maintenance windows. And those with lots of remote users, or users who travel extensively, may want the cloud solution to cover those users when they aren't behind the proxy in the data center. When you have a mix of these requirements, you may want more than one solution in place, for example an appliance in your data center and a cloud solution for your remote and traveling users. In the end, it may turn out that for most organizations a hybrid solution makes the most sense.

Wednesday, October 27, 2010

Multiple Category Ratings

While a blended defense remains critical for the Secure Web Gateway, today we're going to focus on the URL rating technology used by most proxies. It's one component of the many defenses offered by proxy vendors, and probably the basis for most proxy vendors' security solutions. The reason for this is that a URL database of category ratings can be stored on-box, offering quick access to a rating for a specific website. (When a URL isn't in the on-box database, the vendor has to fall back to a real-time rating system and an anti-malware scanning engine, both of which can add latency to loading a web page.)

With today's increasingly complex web pages, it's getting harder to fit a web page into a single category, so it's important for the secure web gateway to recognize multiple categories for a single URL. A great example of this is Facebook. While the base Facebook URL (www.facebook.com) is recognized as a Social Networking site, pages within Facebook may need to be categorized as both Social Networking and a second, third, or even fourth category. For example, the many games available in Facebook, such as Farmville and Mafia Wars, should be rated as both Social Networking and Games. A dual rating lets an IT administrator allow the Social Networking category while blocking Games, preventing wasted time at work while still allowing the use of social media to promote a company's products and services.
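The decision logic behind multi-category ratings can be sketched in a few lines. The category names, example ratings, and block-wins precedence below are illustrative assumptions, not any vendor's actual database or policy engine:

```python
# Sketch of a multi-category allow/deny decision, assuming the URL
# database returns a SET of categories per URL. Any blocked category
# wins over an allowed one, which is what makes dual ratings useful.

BLOCKED = {"Games", "Adult/Mature Content", "Webmail", "Alcohol"}
ALLOWED = {"Social Networking"}

RATINGS = {  # hypothetical lookup results
    "www.facebook.com": {"Social Networking"},
    "apps.facebook.com/farmville": {"Social Networking", "Games"},
}

def decide(url):
    cats = RATINGS.get(url, set())
    if cats & BLOCKED:       # any blocked category wins
        return "block"
    if cats & ALLOWED:
        return "allow"
    return "block"           # unrated: default deny (or coach)

print(decide("www.facebook.com"))             # allow
print(decide("apps.facebook.com/farmville"))  # block
```

With only single-category ratings, Farmville would carry just "Social Networking" and slip through; the second rating is what lets the blocked category take precedence.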

Unfortunately not all proxy vendors have this capability to support Web 2.0 sites, so be sure to check with your vendor to make sure they offer this important basic tool to help secure your web access.

Monday, October 25, 2010

Is Facebook Really a Threat?

In interesting news today, Palo Alto reported that among corporate users who have access to Facebook, 88 percent are "lurkers", meaning they only watch what's going on and what their friends are posting, as opposed to posting things themselves or playing games on Facebook. In fact, the study found that only 5 percent of users played games and only 1.4 percent actively posted updates or comments on Facebook while at work.

It's an interesting observation that we are voyeurs, and one worth noting for companies that plan to block or already block Facebook. The real risks that come from Facebook are loss of productivity and the possibility of clicking on malware. If users really are just voyeurs and aren't actively posting their own updates, there may be some loss of productivity, but probably not as much as employers fear. The threat of malware should be alleviated by making sure the secure web gateway (proxy) has the latest anti-malware and URL filtering software.

The other alternative is of course to allow Facebook browsing, but only during off hours, and to place a coaching page to remind end-users that they should be doing so only on their own time, during breaks for example.

Wednesday, October 20, 2010

The None Category

I had an interesting weekend discussion with an end-user of web filtering products whose company takes the approach that anything rated in the category "None" gets blocked. Unfortunately for end-users, that means new websites and infrequently visited websites are the ones most likely to carry that rating. The end-user also mentioned that it frequently prevented him from finishing the project or job he was working on, leading to a ticket to the IT helpdesk to get access to those sites, a process that usually took at least a week. A week in which he was unproductive.

It got me thinking about the "None" category and what you can do about it as an IT administrator. There are two obvious choices: first, block it like this company did, causing a loss of productivity when websites required for work end up in this category; or the opposite, allow "None" and risk letting malware, phishing attacks, and prohibited websites into the corporate network.

There is a third option, of course, one that it seems not enough companies take advantage of: the coaching page. When a website turns up with a rating of "None", instead of blocking or allowing the page, throw up a coaching page that explains the website has no rating and as such could be a dangerous site harboring a threat that hasn't yet been discovered. Allow users to click through if they are certain the website poses no risk and they agree they have a business purpose for visiting it, and at the same time alert them that their identity and the fact that they visited the site are being recorded for accountability.

With a coaching page, most users who really have no business going to a new site will be wary of visiting it, and will only go if they have a business purpose. It should offload much of the hassle of creating custom exception lists for end-users when their requests get blocked, and leave accountability in the hands of the end-user. Make sure, of course, that your proxy can create such an exception page and will indeed log the user's identity and the site they attempted to visit.
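The coach-on-unrated policy above boils down to a three-way decision with a logging side effect. A minimal sketch, where the category names, verdict strings, and log format are all illustrative assumptions rather than any product's actual configuration:

```python
# Sketch of the "coach on unrated" policy: block known-bad categories,
# allow everything else, and for "None" log the visit and serve a
# coaching page the user can click through.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("swg-policy")

def action_for(user, url, category):
    if category == "None":
        # Record who tried to reach the unrated site for accountability,
        # then coach instead of hard-blocking or silently allowing.
        log.info("unrated site visit: user=%s url=%s", user, url)
        return "coach"
    if category in {"Phishing", "Spyware/Malware Sources"}:
        return "block"
    return "allow"

print(action_for("jdoe", "http://new-site.example/", "None"))  # coach
```

The key design point is that "None" gets its own verdict rather than being folded into allow or block, so accountability shifts to the user without a helpdesk ticket.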

Wednesday, October 13, 2010

Spyware Effects

In reviewing the categories that Blue Coat offers in its URL database, one of them looked to be unique to Blue Coat, and it has a name that might be a little confusing if you don't know what it means. Blue Coat has a category called "Spyware Effects/Privacy Concerns". It's not triggered, as you might imagine, when an end-user tries to go to a site that contains spyware or malware ("Spyware/Malware Sources" is used for that).

"Spyware Effects" refers to when a workstation or PC attempts to go to a site that is known for collecting personal and private information or a site known for sending out instructions to a botnet. The purpose of this category is to alert the IT admin to the possible existence of infected workstations or PCs that have spyware, malware and may have been compromised and are now part of a botnet/zombie net.

It also allows the IT admin to set policy to prevent the compromised PC or workstation from sending out possibly private or confidential information out of the network, as well as preventing the PC or workstation from performing possibly illegal operations.

This is a unique and valuable category and one I'm surprised that I've only found in one proxy vendor's offering.

Tuesday, October 12, 2010

Free Public Wifi

If you ever wondered about the "Free Public Wifi" SSID being broadcast in many public locations like airports, NPR recently took the time to explain the plethora of wifi hotspots sporting this name. As you probably suspected, these aren't legitimate wifi hotspots; they are PCs that have been infected with a virus and are acting like zombies, broadcasting an "ad hoc" network (rather than an infrastructure network). If you were to connect to the network, you'd be connecting your computer directly to the infected PC (not a wifi hotspot), and of course infecting your PC in the process.

It turns out there's an easy fix for most PCs running Windows XP to prevent your machine from becoming part of this zombie network. According to Microsoft, upgrading to Windows XP Service Pack 3 should solve the problem and prevent you from being affected by this virus. So if you haven't done it yet, there's now a good reason to upgrade your PC.

Thursday, October 7, 2010

Cisco adds Web Filtering to VPN Client

In news that might have been overshadowed by their Security Appliance announcement, Cisco also announced that their VPN client, now called "AnyConnect", is adding services from ScanSafe, their cloud service that offers web filtering.

It's an interesting development and a timely one given yesterday's discussion on Mac vulnerability. It provides another avenue for those users who are browsing without the advantage of a Secure Web Gateway to get protection from malicious websites, malware and spyware.

If you're not using the Cisco VPN client, you can also still get this protection on a corporate basis using clients offered by web security companies like Blue Coat and Websense, both of whom offer clients for remote workers surfing the web from hotels and other web access points.

Wednesday, October 6, 2010

Macs are vulnerable to spyware too!

This morning on Facebook, my cousin posted that his Gmail account had gotten hacked (the IP traced to one in China), and that bogus emails were sent to everyone in his address book. The bogus emails included a link to a malicious website. I felt pretty confident in clicking on the link, since I was using Blue Coat's free web filtering program K9, and sure enough, it blocked me from getting to the URL, claiming the site was "Illegal/Questionable".

In the comments to my cousin's post on Facebook, I mentioned he should scan his computer for spyware, as that was likely the culprit behind his Gmail account getting compromised. His response? "Impossible, I'm using a Mac". I think his response is a classic one that many Mac users give when discussing spyware, malware, viruses, and trojans: a basic "it can't happen to me" attitude. Unfortunately, it can happen on Macs; spyware and even malware exist on Macs. Spyware is especially easy to deliver, since it can simply be embedded in JavaScript on a website, and the browser itself makes you vulnerable.

Consider this post a friendly reminder, that just because you're using a Mac doesn't make you immune to spyware, malware and viruses. If you're not browsing behind a proxy that's protecting you with anti-malware and URL filtering, consider installing a free web filtering program like K9 (www.getk9.com).

Thursday, September 30, 2010

Web 2.0 Breaches Cost Businesses $1.1 Billion

Recently in the news: an article on how much Web 2.0 breaches are costing companies.

From: http://www.informationweek.com/news/storage/disaster_recovery/showArticle.jhtml?articleID=227500731&subSection=News


While conceding its value to corporate initiatives, many business professionals have voiced their concerns about security threats associated with Web 2.0. This concern is perhaps with good reason, since more than 60% of those surveyed reported losses associated with Web 2.0 averaging $2 million, a new McAfee-commissioned study found.

One main reason for these breaches, which collectively totaled $1.1 billion, was employee use of social media, according to the report, which was conducted by research firm Vanson Bourne and authored by faculty affiliated with the Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University.

In their efforts to reduce Web 2.0-related risks, almost half the organizations surveyed block Facebook, and one-third restrict employee use of social media, the study said. One-quarter monitor use and 13% completely block all social media access, the McAfee study found.

Half of the 1,000 global decision makers polled said they were concerned about the security of Web 2.0 applications such as social media, microblogging, collaborative platforms, web mail, and content sharing tools. And 60% voiced concerns about the potential loss of reputation as a result of Web 2.0 misuse, found the report, "Web 2.0: A Complex Balancing Act -- The First Global Study on Web 2.0 Usage, Risks, and Best Practices."

"Web 2.0 technologies are impacting all aspects of the way businesses work," said George Kurtz, chief technology officer for McAfee, which Intel recently acquired. "As Web 2.0 technologies gain popularity, organizations are faced with a choice -- they can allow them to propagate unchecked, they can block them, or they can embrace them and the benefits they provide while managing them in a secure way."

In fact, more than 75% of businesses are using Web 2.0: About half of those surveyed use Web 2.0 applications for IT functions; about one-third have adopted these technologies for sales, marketing, or customer service; and 20% are using Web 2.0 apps for human resource or public relations. Three-quarters of respondents who use Web 2.0 believe the technology could create new revenue streams for their organizations, 40% to 45% of businesses said Web 2.0 improves customer service, and 40% said it enhances effective marketing.

Despite security challenges and concerns, about 33% of companies surveyed do not have a social media policy and almost 50% lack a policy for Web 2.0 use on mobile devices, the study found.

Of those that have addressed security worries, 79% increased firewall protection, 58% added greater levels of web filtering, and 53% implemented more web gateway protection since introducing Web 2.0 applications to their companies, according to the report. Forty percent of respondents budget specifically for Web 2.0 security solutions, the study said.

"The best protections are those that don't get in the way of getting work finished, because users are not tempted to circumvent those controls. As not all information needs to be protected in the same way, and not all users are going to interact with Web 2.0 technologies in the same manner, defenses should be tailored to fit the circumstances of use," said Eugene Spafford, founder and executive director of the Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University.

Wednesday, September 29, 2010

News Sites, Searches May Be Riskier Than Porn

A few news articles came out today on a new study showing that you're never more than two clicks away from malware, and that news sites and searches are riskier than porn.

Good reason to make sure your Secure Web Gateway's malware protection is updated, and you're using proactive layered defenses! Here's one of the articles:

From: http://www.informationweek.com/blog/main/archives/2010/09/news_sites_sear.html;jsessionid=BRHLKA15WRWBVQE1GHOSKH4ATMY32JVN

Steer clear of gambling, porn and other known risky sites and related searches and you and your employees -- and your business -- are safer, right? Not according to a new Websense study which found that leading news and pop culture sites, and hot-trend search terms may be more dangerous than some of the ones you're steering clear of.

If you and your employees stick to the most popular news, game, social network sites and message boards, you're still never more than two clicks away from malware, the Websense study reports.

In other words, when it comes to protecting yourself by proscribing your company's surfing and searching habits, you're damned if you don't, but you may also be damned if you do.

The cause is a combination of increased automation and thus ubiquity on the part of the malware community, and the increased use of partner sites and links -- often not previewed, obviously -- by legit sites.

According to Websense, no more than two clicks away from malware or other dangerous content are:

"More than 70 percent of top news and media sites
More than 70 percent of the top message boards and forums
More than 50 percent of social networking sites"

Here's a startling one: more than 60% of sites linking to games also contain links to toxic sites, while less than 25% of sex-related sites contain malicious links.

(Not that this is any reason to alter your policies related to objectionable material, of course.)

Search-poisoning is just as bad. Celebrity and other hot topics have always been malware-attractors, but less newsworthy searches are becoming riskier as well. Do a search for baby bedding in London, Websense found, and a full 30% of the results returned will be poisonous.

It's not exactly breaking news that spammers and malware creators are following hot trends and popular topics, zapping the zeitgeist as it were, with toxic links. But the Websense study shows just how pervasively the bad guys are going after you and your employees via your supposedly safe surfing and searching habits.

Whatever your company's policies are regarding employee Web usage, these findings are a good reminder to remind your employees that just because a link is on a reputable site, there's no guarantee that the link isn't compromised.

Even when they're surfing and searching safely, they have more reason than ever to be careful. To be, in fact, wary, and take one or two very deep breaths before clicking anything.

And certainly before that second click.

Tuesday, September 28, 2010

DLP in a Proxy World

DLP (Data Leakage Protection) seems to have gained steam in the last year. While DLP was once relegated to organizations with government compliance requirements (HIPAA, Sarbanes-Oxley, Gramm-Leach-Bliley, and others), today many organizations are starting to look at DLP to prevent data theft, accidental data loss, and simply possibly embarrassing incidents.

It's impossible to implement DLP without bringing the proxy or Secure Web Gateway into the picture, because the proxy handles all the outbound web traffic in a typical network architecture. DLP relies on the proxy to determine which outbound traffic needs to be relayed to the DLP device for inspection, where it's decided whether the data is sensitive or okay to send out of the organization. This conversation between the DLP device and the proxy occurs over the ICAP protocol discussed here. Unlike anti-malware, which inspects inbound web traffic, DLP is primarily interested in outbound traffic, handled as request modification (REQMOD) in ICAP.
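To make the proxy-to-DLP handoff concrete, here's a sketch of the message a proxy sends in a REQMOD exchange: the client's outbound HTTP request is wrapped inside an ICAP request, per the framing defined in RFC 3507. The hostnames are placeholders, and a real proxy would send this over TCP (ICAP's default port is 1344) and parse the DLP server's verdict; this sketch only builds the message:

```python
# Build a minimal ICAP REQMOD message wrapping an outbound HTTP request.
# The Encapsulated header tells the DLP server where the embedded HTTP
# headers start (offset 0) and where the (absent) body would begin.

def build_reqmod(icap_host: str, http_request: bytes) -> bytes:
    encapsulated = f"req-hdr=0, null-body={len(http_request)}"
    head = (
        f"REQMOD icap://{icap_host}/reqmod ICAP/1.0\r\n"
        f"Host: {icap_host}\r\n"
        f"Encapsulated: {encapsulated}\r\n"
        "\r\n"
    )
    return head.encode("ascii") + http_request

http_req = b"POST /upload HTTP/1.1\r\nHost: files.example.com\r\n\r\n"
msg = build_reqmod("dlp.example.com", http_req)
print(msg.decode().splitlines()[0])  # REQMOD icap://dlp.example.com/reqmod ICAP/1.0
```

The DLP server replies with either the (possibly modified) request to forward, or an error/block response the proxy turns into a denial page for the user.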

DLP of course isn't limited to the proxy and outbound web traffic. There's also outbound email traffic, IM traffic, other outbound network traffic, and physical device security, typically implemented as a client on PCs and laptops. There's also network discovery, to determine what sensitive information is stored on the network and where. Each organization will differ in which of these pieces of DLP matters most, but it's important to recognize that a complete DLP solution requires a bit of thought, along with implementing and integrating with multiple existing services, including the web proxy.

Monday, September 27, 2010

Browse the Web Using Encryption

In case you missed it, this past May Google rolled out the beta of SSL Search. At first they put it at https://www.google.com, but it quickly caused problems for schools and other organizations trying to enforce web browsing policies, so they created a separate site, https://encrypted.google.com, and had https://www.google.com redirect to it. This allowed school admins and other sites that weren't running an SSL proxy to simply block https://encrypted.google.com.

According to Google, SSL Search is just a beta for now, but it could move to the mainstream and even replace the basic search mechanism, except that most IT admins probably aren't ready for it: encrypted search would break all the web browsing policies on their Secure Web Gateway or proxy, because they haven't yet implemented an SSL proxy.

The very fact that Google has introduced SSL search should be a wake-up call to any IT admin running a Secure Web Gateway with browsing policies: it's time to implement an SSL proxy (and the associated malware protection that's necessary, as we discussed in a previous article). Otherwise, the IT admin caught unaware is going to let users bypass their policies, and also let in malware through the SSL backdoor.

Friday, September 24, 2010

What You See Isn't Always What You Get

In any discussion about proxies or Secure Web Gateways, there's always a debate about how effective and complete a vendor's URL categorization happens to be. This is important because an organization's policy enforcement, and its defense against malware, depends on this categorization. It's not surprising, then, that vendors continually seek ways to show up one another in the URL filtering realm with missed or incorrectly classified URLs.

It's hard not to be taken in when you're shown a popular URL and then told, by the way a particular vendor doesn't classify it correctly. Recently I was told that Blue Coat mis-categorized the URL, "http://www.facebook.com/playboy#!/playboy?ref=ts" as only Social Networking and missed the category Adult/Mature Content, but of course correctly identified "http://www.facebook.com/playboy" as both categories.

While it's true that if you plug the URL "http://www.facebook.com/playboy#!/playboy?ref=ts" into a test of Blue Coat's URL categorization you'd get only Social Networking, you have to dig a little deeper to see the truth behind this statement. If an end-user actually tried to visit this URL through a browser, that's not the request the browser would send. When you go to this URL, you're actually visiting (courtesy of AJAX) "http://www.facebook.com/playboy?ref=search&__a=4&ajaxpipe=1&quickling[version]=293384%3B0", a URL that is categorized correctly (and blocked correctly if you have Adult/Mature Content blocked), even though the address bar continues to show "http://www.facebook.com/playboy#!/playboy?ref=ts".
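The underlying mechanics are easy to demonstrate: everything after "#" is a fragment that the browser keeps to itself and never sends to the server, which is why a gateway rating the actual request sees a different URL than the one in the address bar. A quick illustration using Python's standard library:

```python
# The fragment after "#" never leaves the browser; only the part
# before it is sent in the HTTP request that a proxy can rate.
from urllib.parse import urldefrag

url = "http://www.facebook.com/playboy#!/playboy?ref=ts"
sent_to_server, fragment = urldefrag(url)
print(sent_to_server)  # http://www.facebook.com/playboy
print(fragment)        # !/playboy?ref=ts
```

The "#!" pages are then fetched by JavaScript as separate AJAX requests, and those requests are what a gateway actually gets to categorize.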

All this just goes to show that you need to take what one competitor says about another with a grain of salt, and do your own testing to make sure the solution you pick fits your needs.

Wednesday, September 22, 2010

Country-coded Malware

From: http://www.bluecoat.com/blog/country-coded-malware

Late last week, we were tracking a spike in exploit server activity. The majority of traffic was being driven by compromised OpenX ad servers (sound familiar?)... This is most likely due to a critical security flaw in current and older versions of this software. (For details on the flaw, see here.)

An examination of the malicious JavaScript code injected by the compromised server shows that:

1. Cookies must be enabled for the browser to be relayed to the attack site. [Not too exciting. --C.L.]
2. If the user's language has a two-letter region code that is on a "safe" list, then the malicious iFrame that points to the attack site is NOT created. [But this is cool! --C.L.]

As the Bad Guys are normally indiscriminate in the selection of their victims, their decision to give some users a break merits further examination.

Language is often a key feature in tailoring an attack to potential victims. No sense showing a fake AV site in Russian to an English-speaker, or vice-versa. However, as this particular exploit server invisibly attempts to compromise the user's browser while they are busy looking at a legitimate site, language-tailoring does not seem to be the motivation in this case.

One variant of the Conficker malware famously checked for a Ukrainian-language keyboard on the victim's computer, and refrained from infecting the system if it found one. The general presumption at the time was that the authors did this to keep the local police off their case -- it's always harder to catch and prosecute a computer criminal in another country. Again, that doesn't seem to be the motivation here, since the list is so large.

So we're open to suggestions!

Here's the list of "do not attack" countries:


ae UNITED ARAB EMIRATES
al ALBANIA
az AZERBAIJAN
ba BOSNIA AND HERZEGOVINA
be BELGIUM
bg BULGARIA
bo BOLIVIA
br BRAZIL
by BELARUS
ci COTE D'IVOIRE
cn CHINA
cr COSTA RICA
cz CZECH REPUBLIC
dk DENMARK
do DOMINICAN REPUBLIC
dz ALGERIA
ec ECUADOR
ee ESTONIA
eg EGYPT
ge GEORGIA
gf FRENCH GUIANA
gp GUADELOUPE
gr GREECE
gt GUATEMALA
hk HONG KONG
hr CROATIA
hu HUNGARY
id INDONESIA
il ISRAEL
iq IRAQ
ir IRAN
jo JORDAN
kw KUWAIT
lk SRI LANKA
lt LITHUANIA
lv LATVIA
ma MOROCCO
md MOLDOVA
mk MACEDONIA
mt MALTA
my MALAYSIA
om OMAN
pa PANAMA
pk PAKISTAN
pl POLAND
pr PUERTO RICO
ps PALESTINIAN TERRITORY
pt PORTUGAL
qa QATAR
re REUNION
ro ROMANIA
rs SERBIA
ru RUSSIAN FEDERATION
sa SAUDI ARABIA
si SLOVENIA
sk SLOVAKIA
sv EL SALVADOR
th THAILAND
tn TUNISIA
tr TURKEY
tt TRINIDAD AND TOBAGO
tw TAIWAN
ua UKRAINE
uy URUGUAY
vn VIET NAM
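
Putting the two checks together, the injected script's gating amounts to something like the following Python sketch of the JavaScript behavior. The sample region set and the language-parsing details here are assumptions for illustration; the real script's parsing may well differ.

```python
# A small sample of the "do not attack" region codes listed above.
SAFE_REGIONS = {"ae", "br", "cn", "ru", "ua", "vn"}

def should_inject_iframe(cookies_enabled: bool, language: str) -> bool:
    """Mimic the injected script's two gate checks.

    1. Cookies must be enabled for the victim to be relayed onward.
    2. If the browser language carries a region code on the safe list
       (e.g. 'pt-BR' -> 'br'), the attack iFrame is NOT created.
    """
    if not cookies_enabled:
        return False
    # 'en-US' -> region 'us'; a bare 'en' carries no region code.
    parts = language.lower().split("-")
    region = parts[1] if len(parts) > 1 else ""
    return region not in SAFE_REGIONS
```

Note that a bare language tag with no region code falls through to the attack, which is consistent with the Bad Guys' usual indiscriminate approach.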

Tuesday, September 21, 2010

Do Acquisitions Help or Hurt the Proxy?

In the Secure Web Gateway space there has been quite a bit of consolidation over the last few years. Ironport was acquired by Cisco, Secure Computing was acquired by McAfee (which in turn is being acquired by Intel), and Finjan was acquired by Marshal8e6, now M86 Security. SaaS-based Secure Web Gateway offerings have been moving in the same direction, with the acquisition of ScanSafe by Cisco and MXLogic by McAfee.

It seems all the big players want to play in the Secure Web Gateway space. Only Websense and Blue Coat remain independent players in this market, focused specifically on Web security. The good news about an acquisition is that the offering becomes part of a larger company with more resources, so the company or the product is less likely to fail. The bad news is that the product is now part of a larger portfolio, and often there's less focus on and less knowledge about that specific product as employees become generalists who have to understand a broader range of products.

A great example of this is Ironport's acquisition by Cisco. For a while Cisco let Ironport continue on as a separate entity, which was the best of both worlds for Ironport's customers: a dedicated sales and support team with all the backing of a giant company. But in the last year, Cisco moved sales to its general sales team, a group responsible for all of Cisco's products, and more recently the Ironport support team was swallowed whole into Cisco's support infrastructure. Can this be good for the customer who only has Ironport, and uses some other networking vendor for their gear?

While Blue Coat and Websense may not have the giant size of Cisco, at least they still have the specialization and expertise to help their customers with the specific issues associated with web gateways and proxies. Personally, I'll take specialization over company size any day.

Monday, September 20, 2010

On Box or Off Box Anti-virus?

We've discussed the importance of anti-virus (anti-malware) scanning in other posts on this blog, so I won't go over that ground again; suffice it to say you don't have enough protection if you aren't doing anti-malware scanning on your Secure Web Gateway. Today I'm going to tackle a slightly different question: where should the anti-virus scanner go? There are two schools of thought on this one. Some vendors recommend running the anti-malware engine directly on the Secure Web Gateway, while others recommend running a separate anti-malware box, using a protocol called ICAP to transfer data between the Secure Web Gateway and the anti-malware device.

Which of these is right for your environment? The answer really comes down to size. For smaller organizations with limited Internet bandwidth and fewer users, anti-malware on the Secure Web Gateway probably doesn't significantly affect the performance of the box, so running it directly on-box is probably the right answer in terms of performance, lower cost, and less rack space.

For larger organizations, with larger bandwidth requirements and large numbers of users taxing the Secure Web Gateway, you really want to keep the anti-malware separate. This has the added benefit of ensuring your Secure Web Gateway delivers web pages as quickly as possible to time-sensitive end-users. It may seem like there's an added cost in purchasing separate anti-malware systems, but in practice you're probably buying the same number of boxes or fewer than you would with anti-malware on-box, since the performance hit of on-box scanning could easily double your box requirements or more.
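
For the curious, the off-box conversation is just ICAP (RFC 3507): the gateway wraps the origin server's response and ships it to the scanner, which answers with a verdict or a modified response. Below is a minimal Python sketch of building such a RESPMOD request; the "av.example.net" host and "/respmod" service path are hypothetical placeholders, as real scanners publish their own service URLs.

```python
def build_respmod_request(icap_host: str, http_response: bytes, body: bytes) -> bytes:
    """Build a minimal ICAP RESPMOD request (RFC 3507) as raw bytes.

    The Secure Web Gateway encapsulates the origin server's HTTP response
    headers and body; the 'Encapsulated' header tells the scanner the byte
    offset where each part begins. Bodies use HTTP-style chunked encoding.
    """
    chunked = b"%x\r\n%s\r\n0\r\n\r\n" % (len(body), body)
    icap_headers = (
        "RESPMOD icap://%s/respmod ICAP/1.0\r\n"
        "Host: %s\r\n"
        "Encapsulated: res-hdr=0, res-body=%d\r\n"
        "\r\n" % (icap_host, icap_host, len(http_response))
    ).encode("ascii")
    return icap_headers + http_response + chunked
```

The scanner replies over the same connection; a clean "204 No Modifications" lets the gateway serve the original bytes, while a modified response typically carries the vendor's block page.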

So if you're a larger organization, and response time for web pages is key due to the mission critical nature of your web applications, then remember, keeping the anti-malware off box is probably the right answer. If you're a smaller organization and aren't taxing your Secure Web Gateway, then you can probably run your anti-malware on box.

Friday, September 17, 2010

APT - Advanced Persistent Threat

One of the latest buzzwords in the security world is APT, or Advanced Persistent Threat. If you live in the Bay Area and have been listening to news reports, you've heard this buzzword quite a bit in the last couple of weeks, in response to Senator Dianne Feinstein's announcement that cyber threats are her number one issue. A number of commentators on Senator Feinstein's news, all industry veterans, have brought up the topic of APT. It makes you wonder if this is something new you should be worried about.

The truth is that APT doesn't refer to any new malware, trojan, or virus. Instead it refers to the application of cybercrime and hacking techniques against a specific targeted group. Consider it a fancy new way to talk about cyber threats aimed at individuals or groups of individuals, where the attacker has some knowledge about those people.

In relation to the web and web security, this could be a group of people targeted because they are all friends of one person whose Facebook account has been hacked. They all receive notices that their friend is in trouble and needs help, or that the friend has shared a video they should watch, and so on, all leading to different types of cybercrime. Typically none of it is new; it's malware or phishing schemes that have been around for years.

In relation to Senator Feinstein's comments, APT also refers to attacks targeted at the government, or at specific groups within it, either to gather information or to cause problems with networks and infrastructure.

So what can any organization do about APT? The key is to remain vigilant about web security, and of course that involves the Secure Web Gateway and the proxy, the prime subject of this blog. Keep your proxy up to date with the latest security technologies: anti-malware, real-time ratings, SSL inspection, and other newer threat detection mechanisms. The other side of this is web application security for your existing web servers. That's the purpose of the reverse proxy or web application firewall, a topic for a future blog post.

Wednesday, September 15, 2010

Sizing and the Secure Web Gateway

Sizing always seems to be a touchy issue when talking to appliance vendors, and it's no different with Secure Web Gateway vendors, who seem to exaggerate the number of users their platforms support. Whether the vendor is Blue Coat, McAfee (Secure Computing), Websense, or Cisco (Ironport), the claimed user counts always seem to surprise me, perhaps less for some vendors than others.

Let's start with an obvious culprit. Websense is the newest to the appliance game, having introduced its V10000 appliance a little more than a year ago. As the name seems to imply, and as some Websense documents allude to, it supposedly supports 10,000 users -- 10,000 users on a system running a virtualized operating system hosting multiple virtual images. That sounds high to me, but I could be wrong, so I'd like to hear from any real users as to whether they're able to get 10,000 users on a system in proxy mode (not SPAN port, as we've discussed elsewhere in this blog).

Next we move to McAfee, who smartly decided to remove the user-count labels when they introduced their high-end WG5000 and WG5500 platforms. But if you look at their lower-end platforms, like the WW1100E, the old marketing materials claimed support for 8,000 users. Yes, 8,000 users on a low-end platform. Once again, it leaves an IT professional to wonder whether anyone actually gets that many users on a WW1100E, or even on a WG5000 or WG5500 (the new high-end platforms), in a proxy deployment.

Cisco's Ironport offering is no less boastful. For their high-end S660 platform, they claim over 10,000 users, leaving the sub-10,000 user counts to their mid-range platform, the S360. Without too much effort it's easy to tell that the Cisco, McAfee, and Websense offerings are all Dell-based platforms, so one wonders how much juice you can really put under the covers. So I ask once again: does anyone have a deployment of over 10,000 users in proxy mode on a Cisco Ironport S660?

Blue Coat is the one company that actually manufactures its own hardware rather than using something off the shelf, so maybe they can juice up their platforms a little more than the competition -- but enough to claim unlimited users? Yes, that's right: for the high-end platforms, the claimed user count is unlimited. But if you actually ask, you'll get more reasonable numbers; they're just not published on the website. And yes, those numbers are in the same range as the Websense, Cisco, and McAfee high-end platforms.

So, I'm asking readers of this blog, which numbers do you believe in? Which of you are supporting 10,000 users on a single platform deployed as a forward proxy? Help us out and let's see who's exaggerating and who's telling the truth.

Malware quieter, more malicious

From: http://www.post-gazette.com/pg/10255/1086646-467.stm

Did you notice we haven't heard from Melissa lately? Or any of her evil friends -- trojan horses and viruses that we used to see all the time.

That, according to David Perry, global director of education at Trend Micro, is because the types of malware that we're seeing these days (or not seeing) are different and more sinister.

Mr. Perry, whose participation in the antivirus market dates back to 1990 with the Peter Norton Co. and McAfee, tells us the majority of the malware attacks on computer systems and networks in recent memory have been trying to run silently, unlike those of Melissa's ilk which tried to get your attention to prove their creators were macho megalomaniacs.

Mr. Perry quotes statistics showing there are more than 200,000 new malware threats every day; on one date the number of new threats even reached 500,000. That compares with the three to five per month that sprang up in the 1990s.

The real issue is not the number of threats but their stealthiness, the speed with which they attack each system and then leave, and the actual intentions of the malware developers.

He suggests that organized crime has a major stake in these new threats, and that the sole purpose is to steal your vital information, including your credit card numbers, your passwords and any other information that can be used to steal your ID.

That's enough to scare me.

But I've always been a little bit more cautious about protecting my data than most people. Unfortunately, there are only so many things we can do to protect ourselves. Mr. Perry says there are so many places a hacker can get into your system that it is impossible to protect it in the traditional way.

Hackers use key loggers, session recorders and screen scrapers to find out and record what you're typing. They get to your data from inside your system, not from the outside, and they don't necessarily use it immediately -- if at all. He suggests that they're more likely to sell the data in massive doses than to use it themselves.

That's where organized crime comes in. According to Mr. Perry, it could be two years before they use that stolen credit card number they took from you; and the stolen data might've passed through several hands before somebody finally uses it. He says there's even a market on the Internet to buy and sell this type of data.

His company, Trend Micro, is so convinced traditional antivirus techniques will no longer put a dent into the threat, that on Sept. 8, the company was scheduled to release a consumer product to keep you from going to dangerous websites instead of just trying to fix a problem on your system.

Those websites might only be dangerous because a bad guy turned them against you -- not because the website operator is evil. That makes it hard to protect you against yourself.

Mr. Perry's new service puts up warnings that a silent threat might be awaiting you if you continue to the site. It lets you go there if you really want to. Just keep your fingers crossed.



Tuesday, September 14, 2010

'Here You Have' Spam Outbreak Leaves Enterprises Reeling

From: http://www.esecurityplanet.com/news/article.php/3903241/Here-You-Have-Spam-Outbreak-Leaves-Enterprises-Reeling.htm

While the source of the "Here you have" virus that spread like wildfire throughout corporate email servers around the globe may have finally been shut down, enterprise IT departments are still dealing with the fallout from one of the most virulent and fast-moving viruses in recent history.

According to security researchers at Cisco's (NASDAQ: CSCO) IronPort division, the "Here you have" email worm peaked Thursday when the sneaky "download-and-run" malware accounted for a staggering 14.2 percent of all spam messages circulating the Internet -- or more than 42 billion individual spam messages.

Security software firm Sophos, which identified the malware as W32/Autorun-BHO, said the U.K.-based website responsible for spreading the Windows-based virus was shut down sometime Friday, bringing an end to the upheaval.

In the interim, however, the "Here you have" virus clogged corporate email servers around the world. Researchers at Cisco and Sophos reported that outbreak disrupted email systems at large companies, including Comcast, Wells Fargo, Coca-Cola and Google.

Despite its destructiveness, the "Here you have" virus is actually just a new take on an old socially engineered malware scam, according to Sophos security analyst Graham Cluley -- a scam that conjures up memories of the infamous Anna Kournikova spam that devastated email servers some eight years ago.

Similarly to the Kournikova virus, the new W32/Autorun-BHO works by duping users into clicking on an infected email with either the "Here you have" or "Just for you" subject titles. The email then provides a link to what it promises are important PDF documents or pornographic WMV videos.

Instead, those foolish enough to click on the link got an executable file that immediately tried to shut off any legitimate security software applications running on their computer or mobile device.

The virus then sent spam messages to all of the contacts in the victim's address book, helping it spread geometrically and giving "Here you have" even more currency, because the next crop of potential victims thought the infected email they received had been sent by a trusted contact.

"The intention of the attack appears to be to steal information," Sophos security analyst Graham Cluley wrote in a blog post. "The malware downloads components and other tools which extract passwords from browsers (Firefox, Chrome, Internet Explorer, Opera), various email clients, and other applications. [It's] clearly sensitive information, which you don't want falling into the wrong hands."

Blast From the Past

Considering that 90 percent of all email traffic -- 300 billion messages a day -- is spam, the fact that this one variant of spam managed to account for more than 14 percent of the total spam traffic attests to the surprising appeal of what are really old-school malware tactics, security researchers said.

In May, another particularly virulent worm weaseled its way into the Yahoo Messenger community, infecting an unknown number of users after tricking them into clicking on a link masquerading as "foto" or "fotos" from someone in their contact list.

Email viruses of this type figure to become more and more common as hackers continue to find opportunities in social networks, such as Twitter and Facebook where large pools of like-minded or similarly interested potential victims gather to share pictures, links and ideas.

"That doesn't surprise me, as this is something of a return to the malware attacks of yesteryear where hackers didn't care whose computers they hit," Cluley wrote. "They just wanted to infect as many as possible." "Worms like this don't discriminate, deciding their next victim purely by scooping up a list of its next targets from the user's email address book," he added.

Tuesday, September 7, 2010

Overblocking in a Web 2.0 World

In today's Web 2.0 world, the concept of a web page is something of a misnomer. Most people are already aware that a single web page is actually made up of many embedded links -- in some cases hundreds -- each providing a piece of the single unified page that's displayed. Any one of those links could contain malware, while the rest could carry information necessary to complete a user's task or job at hand.

For most, the secure web gateway is the device in the network that protects the end-user from malware by blocking the specific embedded URL that contains it. But often it's not that simple. Today's sophisticated attacks take advantage of SEO (Search Engine Optimization) poisoning and link farms, where tens of thousands of links are created pointing at a few handfuls of malware sites, making it hard for security devices to determine which sites are good and which merely contain an embedded link to a malware site (often while hosting good content at the same time). The challenge, of course, is not to block websites that only contain links to other links that lead to malware. Blocking at too high a level will inadvertently cause end-users to miss content they need, an effect known as over-blocking.
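
The right granularity is to block the bad embedded link, not the page that carries it. Here's a minimal Python sketch of that idea using only the standard library; the blocklist and URLs are hypothetical, and a real gateway would consult its full URL database rather than a host set.

```python
from html.parser import HTMLParser
from urllib.parse import urlsplit

# Hypothetical blocklist of known malware hosts.
MALWARE_HOSTS = {"evil.example.net"}

class EmbeddedLinks(HTMLParser):
    """Collect the URLs a page embeds via scripts, iframes, and images."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "iframe", "img"):
            src = dict(attrs).get("src")
            if src:
                self.links.append(src)

def partition_links(html: str):
    """Split a page's embedded links into (allowed, blocked) lists.

    Blocking only the embedded links that resolve to malware hosts -- and
    serving the rest of the page -- avoids the over-blocking that comes
    from rejecting any page that merely links to a bad site.
    """
    parser = EmbeddedLinks()
    parser.feed(html)
    allowed, blocked = [], []
    for url in parser.links:
        host = urlsplit(url).netloc
        (blocked if host in MALWARE_HOSTS else allowed).append(url)
    return allowed, blocked
```

A page carrying one poisoned iframe among dozens of legitimate resources loses only the iframe, and the end-user still gets the content they came for.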

One of the problems with over-blocking is that it can make your secure web gateway solution look like it's doing a great job; but without doing the work to verify there really is malware behind a link, you don't know whether your solution has just prevented you from reaching important information. While over-blocking is a well-known problem, it's hard to detect until an end-user complains about access to information. Part of evaluating whether a secure web gateway solution over-blocks is finding out what the vendor does to prevent it. Understanding how the solution works, and what causes a site to be blocked, is the first step in preventing over-blocking in a Web 2.0 world.

Monday, August 30, 2010

SSL Proxy And Anti-Malware Go Hand In Hand

At first glance you may think that an SSL proxy and anti-malware have nothing to do with each other. While each serves its own purpose in a Secure Web Gateway architecture and deployment, they are actually crucial to each other's success in protecting an organization's network from web-based threats, malware, and cybercrime.

Let's start with the SSL proxy. Having a web proxy without an SSL proxy used to be quite common, as few pages other than financial services sites used encryption. There was a time when a web proxy that handled pages in the clear covered almost all the web pages relevant to an organization's policy compliance. Today, webmail offerings routinely use SSL-encrypted logins and even maintain SSL for entire email sessions. SSL is also used wherever personal credentials are entered, whether on a social networking, shopping, or other entertainment site. Because of the widespread use of encryption on websites, making sure you use an SSL proxy (basically a proxy that can inspect and enforce policy on the contents of an SSL session) is more important than ever.

At one time SSL proxy and inspection was important mostly for DLP (Data Leakage Protection). Organizations used it to make sure confidential data wasn't leaving the organization through secure encrypted sessions. Today it's important to make sure web threats don't enter through secure encrypted connections.

The key to providing security with SSL inspection is an anti-malware or anti-virus scanner. Traditional methods of content inspection, like URL databases and real-time rating in the cloud, are hampered by the user credentials usually associated with SSL. URL databases rely on generally available URLs, not the custom URLs generated after a user's credentials are verified. Real-time rating systems suffer from the same problem: they rate pages they can reach, and secure web gateways generally don't send users' credentials across the Internet to a real-time rating system to fetch the full contents of a URL, as that would generally be considered a security risk or even a breach.

That leaves only one way to ensure the content of an SSL-encrypted page is safe: use an anti-malware or anti-virus scanner locally at the proxy to inspect the data the SSL proxy receives as it comes in from the Internet. If the anti-malware program detects any threats, the proxy can block the downloads and infected web pages. Without an SSL proxy and anti-malware, threats buried in encrypted pages would pass straight into the organization's network.

A company using an SSL proxy should of course follow prudent guidelines around privacy with regard to content found in SSL sessions. A common approach is to set up the SSL proxy to bypass visits to financial sites, so as not to invade a typical end-user's privacy.
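
In practice that bypass is just a category-based decision made before interception. A Python sketch follows; the category names and lookup table are hypothetical stand-ins for whatever your gateway's URL database actually provides.

```python
# Hypothetical category lookup; a real gateway consults its URL database.
CATEGORY_DB = {
    "www.mybank.example": "Financial Services",
    "webmail.example.com": "Web-based Email",
}

# Categories whose SSL sessions we tunnel untouched, for privacy reasons.
BYPASS_CATEGORIES = {"Financial Services", "Health"}

def ssl_action(hostname: str) -> str:
    """Decide whether to intercept an SSL session or tunnel it untouched.

    The hostname is available before decryption (from the client's request
    or the server certificate), so the privacy decision can be made without
    ever looking inside the session.
    """
    category = CATEGORY_DB.get(hostname, "Uncategorized")
    return "bypass" if category in BYPASS_CATEGORIES else "intercept"
```

The important design point is that the decision keys off the hostname alone, so financial sessions are never decrypted at all.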

Any organization concerned with web threats needs to implement an SSL proxy if it hasn't done so already, and tied to that implementation needs to be a plan to make anti-malware scanning a standard part of the web gateway.

Monday, August 23, 2010

Why Do You Need a Proxy in the Secure Web Gateway?

In today's web based world, web threats are at an all time high. Whether it's an iFrame injection, a drive-by download, phishing, or just plain malware, end-users browsing the web are at a higher risk than ever before of having their computers and identities compromised. It's no surprise then, that more companies than ever are looking to implement a Secure Web Gateway, or updating their existing gateways.

For many the term Secure Web Gateway is interchangeable with the term proxy, but not all Secure Web Gateways are proxies. It's an important distinction to make, because originally Secure Web Gateways were implemented to enforce corporate or organizational policy (such as preventing shopping on the web during office hours), but in today's threat laden world, having a proxy in the Secure Web Gateway is more important than ever in the battle against cybercrime, malware and phishing.

By specifically requiring a proxy in the Secure Web Gateway, you guarantee that all traffic terminates at the proxy. When a client makes an HTTP request, it goes to the proxy, and the proxy responds, acting as the server accepting the connection. The proxy then acts as the client, making the same request the client made to the destination server. By forcing all traffic to terminate at the proxy, the proxy has the ability to inspect everything flowing through the device, and ensures no traffic flows through without inspection.
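
The termination pattern described above can be sketched in a few lines of Python. This is deliberately minimal -- no HTTPS, no header forwarding, no error handling -- and the blocklist is a hypothetical stand-in for a real policy engine; it exists only to show that nothing passes without a policy check.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen
from urllib.parse import urlsplit

BLOCKED_HOSTS = {"malware.example.net"}  # hypothetical policy list

def request_allowed(url: str) -> bool:
    """Policy check applied to every terminated request."""
    return urlsplit(url).netloc not in BLOCKED_HOSTS

class TerminatingProxy(BaseHTTPRequestHandler):
    """Act as the server to the client, then as a client to the origin.

    Because every request terminates here, nothing reaches the origin --
    and no response reaches the client -- without passing the policy check.
    """
    def do_GET(self):
        # In proxy mode the request line carries the absolute URL.
        if not request_allowed(self.path):
            self.send_error(403, "Blocked by policy")
            return
        with urlopen(self.path) as upstream:   # proxy acts as the client
            body = upstream.read()             # full response is inspectable here
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("127.0.0.1", 3128), TerminatingProxy).serve_forever()
```

Contrast this with a TAP deployment: here the response body sits in the proxy's hands before the client ever sees a byte, so there's no race against the wire.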

Alternative Secure Web Gateway deployments, such as TAP (or SPAN port) deployments, have the gateway sitting off to the side of the network, observing traffic as it passes by instead of intercepting and terminating it. These deployments have a specific flaw: malware and other threats can get by if the gateway doesn't detect the threat in time, or doesn't send out a TCP reset packet in time to disrupt the flow of traffic. It's not a guaranteed security mechanism. It may have worked well enough for enforcing organizational policy, but it's definitely not a safeguard against web-borne threats.

Today, the only true way to have full protection against web threats is to intercept all web-bound traffic using a proxy architecture. Depending on the vendor, your proxy device may also intercept and protect other forms of Internet-bound traffic, like FTP, telnet, and other protocols. Protecting your mission-critical network from inbound threats should be a top priority, and you need to make sure your Secure Web Gateway processes all the traffic by using a proxy architecture.

Sunday, August 22, 2010

IE6 Still Used By 20% Despite Flaws

From: http://www.informationweek.com/blog/main/archives/2010/08/ie6_still_used.html

Summary

According to Zscaler's latest State of the Web report, one in five business users continue to browse with IE6, despite its being nine years old and far less secure than newer browsers.

Article

The latest State of the Web report from Zscaler holds plenty of interesting -- and scary -- insights into the threat environment, but one item in particular caught my eye.

According to the security firm's tracking of Web traffic, 20% of business users are continuing to use Microsoft's Internet Explorer 6, despite the browser's being seriously out of date, and seriously risky. While Zscaler's IE6 numbers are higher than some, it's clear that a large number of users continue to stick with the old browser, despite every encouragement -- not to mention need -- to upgrade or replace it.

At nine years old and counting, IE6 has been out of date and risky for a while. Over a year ago, Matt McKenzie described IE6 as a "Ford Pinto with a leaky fuel tank", and it's hard to top that -- except for the fact that another year has gone by and the leaky vehicle is still being driven by a lot of people.

Zscaler does see IE6 usage -- and Explorer usage overall -- declining. But the persistence of the browser says much about the dilemma of employees sticking with flawed, dangerous technology.

It's pretty easy to come up with some obvious explanations for the browser's longevity. If you or your employees are still running IE6, ask yourself if any of these apply:

Budget: Your company bought machines with IE6 installed, and have never upgraded either software or hardware.

Inertia: IT is not a primary focus at your company; if it's working, keep working with it.

Good enough technology: One of the non-security knocks against IE6 is that it's not up to the demands of the Brave New Web -- hence the number of apps that are dropping support for IE6. If your company isn't interested in the new Web, why should you invest the time required to upgrade your browsers?

Lack of awareness: A subset of both inertia and good-enough technology, this one probably explains a large percentage of the holdouts. It's the same thing that explains why so many security flaws remain unpatched long after patches are released.

Stubbornness: The best example of this is the UK government's recent decision to stick with IE6, explaining that it's "more cost effective in many cases to continue to use IE6 and rely on other measures, such as firewalls and malware scanning software, to further protect public sector Internet users." In other words, put a catchpan under the leaky gas tank, but keep on driving.

None of these explanations makes much more than surface-level sense today. With browsers of every variety rapidly becoming the attack vector of choice, holding onto an old, flawed browser that's unprepared for either today's threats or today's Web is a risk few businesses can afford.

Time to retire IE6 from your business, if your business is one of the ones still running it.

And while you're at it, you might take an age-and-ability check on all the other software you run.

Saturday, August 21, 2010

Malware Threats At Record High

From: http://www.itproportal.com/portal/news/article/2010/8/17/mcafee-warns-malware-threats-record-high/


McAfee has said that it has registered record levels of new malware threats over the first half of 2010.

The security company said in its quarterly report that it had been indexing 55,000 new malware threats every day during the first half of 2010.

McAfee suggests that the increased threat of malware is due to the pace of technological progress.

The company's director of security, Greg Day, suggested that the rise in malware could also be due to development of sophisticated malware generation tools.

He said that an increase in malware allowed hackers to exploit individuals and enterprises in new ways.

In a statement, Day said: “Now [there] is what we call malware generation tools [which] let you create different kinds of threats, but they can do it in hundreds and thousands of different guises.”

McAfee advised users to apply ethical hacking techniques to check the strength of their network and applications and fix flaws before rogue hackers can exploit their vulnerabilities.


Friday, August 20, 2010

$1 Million Stolen from UK Bank Accounts by New Zeus Trojan

From: http://www.spamfighter.com/News-14952-$1-Million-Stolen-from-UK-Bank-Accounts-by-New-Zeus-Trojan.htm

$1 Million Stolen from UK Bank Accounts by New Zeus Trojan

Researchers at M86 Security have disclosed another botnet built on the Zeus Trojan, named Zeus v3, which has been swiping bank information from accounts at an unnamed financial institution in the UK. This ongoing attack is known to have stolen £675,000, or nearly $1.1 million, from customers between July 5, 2010 and August 4, 2010.

Security firm M86 has elaborated that in addition to the usage of Zeus v3 Trojan, cyber criminals are using the Phoenix and Eleonore exploit kits. These kits exploit victims' browsers to inject Trojans into their PCs.

The process began with corrupt banner advertisements placed on legitimate websites. Users who followed an advertisement were taken to a malicious website hosting the exploit kits, and their computer systems would become infected, said the security researchers.

With Zeus v3 on the victim's PC, the victim's online bank account and details such as date of birth, ID and a security number would be transferred to the command and control (C&C) server. When the user entered the transaction portion of the site, the Trojan would report to the C&C server and receive new JavaScript to replace the bank's original JavaScript. Once the user submitted the transaction form, the data was sent to the C&C server instead of the bank.

Bradley Anstis, Vice President of Technical Strategy for M86, shed light on the latest sophisticated attack. Anstis said that the initial infection, where the exploit kit compromised the victim's machine, used a number of vulnerabilities listed in their paper. "One of the vulnerabilities was an Internet Explorer flaw which affected IE v6 & v7," as reported by news.cnet on August 10, 2010.

That was only one of the six or so vulnerabilities that could have been used for the initial infection; the exploit kits test the victim machine against each one in order to achieve a successful infection.

In another statement, Anstis concluded that the only way to protect against such attacks within the browser is to implement real-time code analysis technologies that can proactively detect and block malicious commands, as reported by computerweekly on August 13, 2010.
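One simplified way to picture what such analysis must catch: if a gateway knew the hashes of the bank's genuine scripts, a page whose JavaScript the Trojan has replaced would no longer match. This toy hash allow-list is our own illustration (the script contents and hashes are invented), not M86's actual technology:

```python
import hashlib

# Allow-list of SHA-256 digests for the bank's known-good scripts
# (hypothetical values; a real deployment would track each release).
KNOWN_GOOD = {
    hashlib.sha256(b"function submitPayment(){/* genuine bank code */}").hexdigest(),
}

def is_script_trusted(script_bytes: bytes) -> bool:
    """Return True only if the script exactly matches a known-good version."""
    return hashlib.sha256(script_bytes).hexdigest() in KNOWN_GOOD

# A page altered by a web-inject no longer matches its recorded hash.
genuine = b"function submitPayment(){/* genuine bank code */}"
injected = genuine + b"/* attacker-appended exfiltration code */"
print(is_script_trusted(genuine))   # True
print(is_script_trusted(injected))  # False
```

Real-time code analysis goes much further, of course, since attackers obfuscate and mutate their injections; exact matching only illustrates the core idea of comparing served code against a known-good baseline.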

» SPAMfighter News - 18-08-2010

Thursday, August 19, 2010

Do You Really Need Anti-virus in Web Filtering?

The topic of anti-virus or anti-malware in the Secure Web Gateway is an issue many organizations face when trying to deal with the onslaught of threats from the web. Traditionally, web gateways include features such as proxy capability, URL filtering, and maybe even real-time web page categorization to help secure the organization's users from web threats and enforce corporate policy.

The argument many organizations face is that they are already paying for URL filtering, real-time web rating, and an anti-malware program on the desktop. Why do they need to spend more to get anti-malware and anti-virus on the Secure Web Gateway? What benefit, if any, do the end-user and the organization get from adding anti-malware to the Secure Web Gateway, when the end-user is supposedly already protected by a desktop anti-virus program?

These are good questions, and ones the organization needs to look at carefully when making the decision to add anti-malware to the Secure Web Gateway. While an organization may have anti-malware programs running on their end-users desktops, they generally have little control over how often these programs are updated, or if they are even running (some end-users may have even disabled them to gain performance on their laptops or desktops). Would you trust your corporate security to your end-users? By relying on their desktop anti-malware, you're essentially relying on the end-user to make sure they are practicing the best cyber hygiene.

Maybe as an administrator you already trust that URL filtering and dynamic real-time rating are protecting you from web threats. While these two technologies are great as part of a layered defense mechanism, they each serve a distinct role in protecting the end-user and the organization. A URL filtering database provides the quickest way to provide feedback to an end-user on whether a website is safe. Known bad websites will already be categorized as malicious.

A website URL not found in the URL filtering database moves to the next layer of defense, typically a cache of URL information hosted at the vendor's site, and then, if still not found, a real-time rating system that examines the website on the fly to determine its category. All these mechanisms drive toward determining not only the category of a website, but also whether or not that website has malicious content, and then blocking it (or an embedded URL that contains malicious content) as appropriate.
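A hypothetical sketch of that fall-through, with invented category names and lookup tables standing in for the real local database, vendor cache, and rating engine:

```python
# Illustrative layered URL lookup: local database first, then the
# vendor's cloud cache, then real-time rating. Not a real vendor API.

def local_db_lookup(url):
    return {"http://known-bad.example/": "malicious"}.get(url)

def vendor_cache_lookup(url):
    return {"http://shopping.example/": "shopping"}.get(url)

def realtime_rating(url):
    # A real engine would fetch and analyze the page content here.
    return "uncategorized"

def categorize(url):
    # Fall through the layers until one of them returns a category.
    for layer in (local_db_lookup, vendor_cache_lookup, realtime_rating):
        category = layer(url)
        if category is not None:
            return category

def allow(url):
    return categorize(url) != "malicious"

print(allow("http://known-bad.example/"))  # False
print(allow("http://shopping.example/"))   # True
```

The ordering matters: the local database answers fastest, and the slow real-time rating is only consulted when the cheaper layers have no answer.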

All this sounds great, and many administrators may be lulled into thinking they are completely protected by this layered defense mechanism. But in reality they should add one more layer of defense, and that's the anti-malware/anti-virus scanning at the Secure Web Gateway. Why is this necessary? Think about what happens when a known good website gets attacked, and ends up with an infection of malware or virus. There's going to be a period of time before a URL database, or URL cache or even real-time rating system picks up on the infection. Until that information is updated, that website is being passed on as a "good" site. An anti-malware program at the gateway would add that layer of defense that would catch that the site has been infected and prevent the end-user from downloading a virus in that short window of vulnerability.

No infection is a good infection, and layered defense is a necessity with today's web threats. Make sure you close an additional window of vulnerability by adding anti-malware/anti-virus to your Secure Web Gateway. Adding a different vendor's anti-virus from your desktop anti-virus also adds another layer of protection, so that if one anti-virus vendor misses a threat the other has a greater chance of recognizing it.

Wednesday, August 18, 2010

A Powerpoint Presentation Explaining Proxies

In case you were looking for more materials on explaining why you need a proxy, and what a proxy does, I discovered a new powerpoint presentation along with complete speakers notes here:

http://www.authorstream.com/Presentation/smrutiprayag-475558-proxy-servers/

It goes through the basics and explains why and how to use a proxy for web and email.

SSL Proxies

Found this recently, a topic we've talked about in the past here at The Proxy Update, and one gaining more relevance all the time.

From: http://www.infosecblog.org/2010/08/ssl-proxies/


Because port 80 is open outbound from the firewall, many applications send their traffic across it to avoid firewall issues. This has led to port 80 being called the Firewall Traversal Exploit. Port 443, then, is the Secure Firewall Traversal Exploit, because it allows traffic out in an encrypted fashion.

Because it's encrypted, users can bypass protections in place for HTTP to download viruses, access forbidden sites and leak confidential information. This is limited only by the availability of SSL sites. In recent years webmail like GMail has gone to full SSL sessions. Bad guys can easily set up SSL as well. Without an SSL proxy, all you can do to address these concerns is block by IP address. IP addresses change frequently and are less likely to be categorized in a URL block list.

When you use a SSL proxy, the web traffic is terminated at the proxy server and a new request is made to the remote server. The client browser uses a certificate from the proxy to secure data during the first leg of this transaction. This will result in a certificate error if you don’t deploy the proxy’s self-signed certificate as a trusted root. Because the client never sees the certificate of the remote server, the user does not get information about the trustworthiness of that certificate. For this reason it is necessary to either block all bad certificates or make sure your SSL proxy can pass on that certificate info when the certificate is expired or does not chain to a trusted root.

The SSL proxy can use the hostname (CN) in the server certificate to make a URL categorization decision to intercept or tunnel the traffic.

Because you can intercept based on URL categorization, you could choose to intercept (and block) only websites that are in your blocked categories. This is the simplest implementation of an SSL proxy. It blocks sites that wouldn't have been blocked before and it doesn't interfere with anything else. If a computer doesn't have your certificate in its trusted root, it's not that bad, because the site would have been blocked anyway.

A slightly more intrusive step is to also intercept webmail sites. Webmail sites have the potential to download malware although the site itself is valid. By intercepting the site the download is scanned by the antivirus layer. A related idea is intercepting all uncategorized sites so they can be scanned.

A full implementation involves intercepting everything not categorized as a financial site. (It is not recommended to intercept financial websites, for obvious privacy reasons.) Intercepting everything allows you to scan all downloads for viruses. The main drawback is you'll have more issues with web applications not conforming to HTTP standards.
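The graduated policy described above can be sketched as a simple decision function over the certificate CN. The categories, hostnames, and block list below are invented examples, not any vendor's defaults:

```python
# Illustrative SSL proxy policy: categorize by the server certificate's
# CN, then decide whether to tunnel, intercept-and-scan, or block.

BLOCKED_CATEGORIES = {"gambling", "adult"}
INTERCEPT_CATEGORIES = {"webmail", "uncategorized"}

def categorize_cn(common_name):
    # Stand-in for a URL categorization database lookup.
    known = {"mail.example.com": "webmail",
             "casino.example.net": "gambling",
             "bank.example.org": "financial"}
    return known.get(common_name, "uncategorized")

def ssl_decision(common_name):
    category = categorize_cn(common_name)
    if category in BLOCKED_CATEGORIES:
        return "intercept-and-block"
    if category in INTERCEPT_CATEGORIES:
        return "intercept-and-scan"   # decrypt so the AV layer can scan
    return "tunnel"                   # e.g. financial sites pass through

print(ssl_decision("casino.example.net"))  # intercept-and-block
print(ssl_decision("bank.example.org"))    # tunnel
```

Moving from the simplest deployment to the full one is just a matter of growing the intercept set, which is why starting with blocked categories only is a low-risk first step.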

I think the simplest option of only intercepting websites classified in categories on your block list is best. It provides additional security without potential for complications. You’d have to make a security decision for your own environment.

There are security considerations to intercepting traffic. When you only intercept a site to block it you don’t have sensitive data but as you intercept other categories, you must take care. Sensitive data may now be exposed in clear text. You may want to think twice about what you are logging and caching. If any offbox analysis is performed you need to encrypt the connection and make sure nothing is on the remote box.

A lot of attacks occur over the web, and it's important to provide the best defense. It's no longer good enough to ignore 443/TCP.

Tuesday, August 17, 2010

Five billionth device about to plug into Internet

From: http://www.networkworld.com/news/2010/081610-5billion-devices-internet.html?source=NWWNLE_nlt_daily_am_2010-08-17

Sometime this month, the 5 billionth device will plug into the Internet. And in 10 years, that number will grow by more than a factor of four, according to IMS Research, which tracks the installed base of equipment that can access the Internet.

On the surface, this second tidal wave of growth will be driven by cell phones and new classes of consumer electronics, according to an IMS statement. But an even bigger driver will be largely invisible: machine-to-machine communications in various kinds of smart grids for energy management, surveillance and public safety, traffic and parking control, and sensor networks.

Earlier this year, Cisco forecast equally steep growth rates in personal devices and overall Internet traffic. [See "Global IP traffic to increase fivefold by 2013, Cisco predicts"]

Today, there are over 1 billion computers that regularly connect to the Internet. That class of devices, including PCs and laptops and their associated networking gear, continues to grow.

But cellular devices, such as Internet-connected smartphones, have outstripped that total and are growing at a much faster rate. Then add in tablets, eBook readers, Internet TVs, cameras, digital picture frames, and a host of other networked consumer electronics devices, and the IMS forecast of 22 billion Internet devices by 2020 doesn't seem farfetched.


The research firm projects that in 10 years, there will be 6 billion cell phones, most of them with Internet connectivity. An estimated 2.5 billion televisions today will largely be replaced by TV sets that are Internet capable, either directly or through a set-top box. More and more of the world’s one billion automobiles will be replaced by newer models with integrated Internet access.

Yet, the greatest growth potential is in machine-to-machine, according to IMS President Ian Weightman. Research firm Gartner named machine-to-machine communications one of the top 10 mobile technologies to watch in 2010. And almost exactly one year ago, Qualcomm and Verizon created a joint-venture company specifically to support machine-to-machine wireless services.

"This has the potential to go way beyond industrial applications to encompass [such applications as] increasingly sophisticated smart grids, networked security cameras and sensors, connected home appliances and HVAC equipment, and ITS infrastructure for traffic and parking management," Weightman said in a statement.

Monday, August 16, 2010

IPV6 Proxy

Mention IPv6, and I believe most people will know what you are referring to. But that's about all; most will be limited to a general recognition of what you're talking about. Since the birth of IPv6 in the mid-1990s, discussion of the topic has been hot, but among the majority of users there are few who can really use IPv6 applications!

Where does the problem lie? On the one hand, the application and deployment of IPv6 itself is small; on the other, and even more crucial, is the interoperability between traditional IPv4 applications and the IPv6 network, which makes IPv6 networks and applications basically their own little islands: traditional IPv4 users do not have access, and the development of new IPv6 networks is not yet widespread.

The root cause of this situation is, in fact, that the IPv6 protocol is incompatible with existing IPv4 technology. As 51CTO.com reported previously, the Internet Engineering Task Force (IETF) admitted that it committed a fatal error in the IPv6 standards by not providing the existing Internet protocol a way to be backward compatible with IPv4. Qing Li, a senior IPv6 architect and security expert at Blue Coat in the United States, said in an interview: "The lack of real applications for IPv6-oriented solutions forces enterprises to consider, in the end, either the use of an appropriate relocation program [to IPv6 networks] or a comprehensive upgrade. The huge and comprehensive upgrade costs are enough to make most companies balk."

In other words, the current key issue is how to move users from IPv4 to IPv6: basically a transition and convergence between the two. Mr Li said: "A smooth shift to IPv6 requires a strategy for the safe migration of business applications and services." Well, is there a solution to this problem?

The answer is yes. It is called an IPv6 Proxy.

The IPv6 proxy is a proxy between IPv4 and IPv6 networks, allowing for transition and conversion with a single piece of equipment. Mr Li explained that the intelligence behind the IPv6 proxy is that it allows the user access between networks without the need for address translation: administrators do not have to rewrite applications or upgrade IT infrastructure, and IPv6 applications can be accessed from IPv4 networks today. Services and data in both the IPv4 and IPv6 environments can interact smoothly today, and have an easy migration tomorrow.

In other words, the IPv6 proxy acts as both client and server, regardless of whether the client is IPv4 or IPv6, so that an IPv6 client without special equipment can communicate with an IPv4 server. Similarly, when a traditional IPv4 client sends a request to an IPv6 server application, the proxy will intercept the request and convert it into an IPv6 request to the server; when the server returns information, it also passes through the IPv6 proxy, eventually returning to the client.

How does this conversion work? Mr Li explained that the Blue Coat IPv6 proxy works by terminating TCP connections and re-packaging packets. As we all know, TCP works at the fourth layer, on top of the IP protocol, while other applications are built on the TCP or UDP protocols. The Blue Coat IPv6 proxy terminates the client's TCP request on one side, analyzes the application-layer protocol request, applies security policies, negotiates a new connection with re-packaged packets, and issues the appropriate request to the server, while the client receives a normal response.

From the client's view, requests and responses proceed normally as expected (whether IPv4 or IPv6), without any address translation or other intervention on the client, so everything is transparent. To the server, the client appears to send a request that conforms to the IPv6 protocol itself. For existing enterprise IPv4 applications, the IPv6 proxy device can issue IPv4 requests on behalf of IPv6 clients, allowing for a more secure IPv6 backbone network while giving IPv4 applications the use of IPv6 applications and services as well.
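The dual-role behavior Mr Li describes can be sketched at the plumbing level. The sketch below is not Blue Coat's implementation (the function names and single-connection design are illustrative): it accepts a client on one address family, opens its own connection to the server on the other, and relays bytes both ways. A real proxy inserts protocol analysis and policy between the two legs.

```python
import socket
import threading

def relay(src, dst):
    """Copy bytes from src to dst until EOF, then close dst's write side."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def proxy(lsock, server_addr, server_family):
    """Accept one client on lsock and bridge it to server_addr.

    lsock may be an IPv4 listener while server_family is socket.AF_INET6
    (or vice versa); the address-family conversion is the whole trick,
    since each side only ever talks to the proxy.
    """
    client, _ = lsock.accept()
    upstream = socket.socket(server_family, socket.SOCK_STREAM)
    upstream.connect(server_addr)
    t = threading.Thread(target=relay, args=(client, upstream))
    t.start()
    relay(upstream, client)
    t.join()
    client.close()
    upstream.close()

# Example wiring (not executed here): bridge IPv4 clients to an IPv6 server.
#   lsock = socket.create_server(("0.0.0.0", 8080))
#   proxy(lsock, ("2001:db8::1", 80, 0, 0), socket.AF_INET6)
```

Because each leg is an independent TCP connection, neither endpoint needs to know the other's address family, which is exactly why no application rewrite is required.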

With the IPv6 proxy the challenge of IPv4 and IPv6 interaction appears to be solved. But a company may ask: with conversion between the two protocols, are there any safety concerns? Moreover, with the deployment of such a device, will the network transmission speed and quality be affected? Will a proxy affect existing applications? Will it greatly increase the network cost?

Mr Li explained that in deploying IPv6 proxy equipment on the network, companies may face challenges. He sums them up into four areas, which can be examined using the initials "SUVA".

First, companies have the issue of content security control (Security): how to use IPv6 and the proxy to ensure that enterprise applications meet business management and compliance needs, while eliminating the need to re-certify existing security policy.

Second is usability (Usability): making sure the product is convenient and reliable in application, and that the original applications remain transparent and convenient.

Third is application visibility (Visibility), for which the demands on an IPv6 proxy are very high. All the applications passing through the proxy require visual management to help network administrators follow application flows and gain a comprehensive understanding of, and control over, web content. Achieving effective application performance monitoring and adjusting network resources is key.

Finally there is acceleration (Acceleration) at the application-layer protocol level, where a business needs to accelerate applications. Acting as an intelligent device, an IPv6 proxy should also provide acceleration capability so the network and applications deliver the best experience for end-users.

It should be said that these challenges also contributed to the development of the IPv6 proxy, and are an important reason the product was relatively difficult to bring to market. Mr Li explained that Blue Coat's first-generation IPv6 proxy required five years of R&D. The key to its success is that the intelligent IPv6 proxy's handling of client requests is not just a simple address translation but covers a number of management issues, including the analysis of applications, auditing and security management, and content strategies such as caching and acceleration. In the end, for network access, only if a request meets the enterprise's security management requirements will the proxy issue it, and optimize it.

Editor's note: This article is translated from Chinese, and the grammar has been corrected, but we haven't taken the extensive time necessary to make this article flow and read like fluent English.

Tuesday, August 10, 2010

Top Ten Web Malware Threats

From: http://www.esecurityplanet.com/print.php/3897476

Websites that spread malware may be leveling off, but Web-borne malware encounters are still growing. According to a 2Q10 Global Threat Report published by Cisco, criminals are using search engine optimization and social engineering to become more efficient, luring more targeted victims to fewer URLs.

Using IronPort SenderBase, Cisco estimated that search engine queries led to 74 percent of Web malware encounters in 1Q10. Fortunately, two-thirds of those encounters either did not deliver exploit code or were blocked. But that means 35 percent of Web-borne exploits are still reaching browsers, where they try to drop files, steal information, propagate themselves, or await further instructions.

Browser phishing filters, anti-malware engines, and up-to-date patches can play a huge role in defeating malware reaching the desktop. However, to find unguarded vectors and unpatched vulnerabilities, let's look at how today's most prevalent Web malware works.

#10: Last on Cisco's list of 2Q10 encounters is Backdoor.TDSSConf.A. This Trojan belongs to the TDSS family of kernel-mode rootkits; TDSS files are dropped by another Trojan (see Alureon, below). Once installed, TDSS conceals associated files and keys and disables anti-virus programs by using rootkit tactics. Removing TDSS from a PC is difficult; using up-to-date anti-malware to block the file drop is a better bet.

#9: Ninth place goes to an oldie but goodie, Mal/Iframe-F. Many variants use this popular technique: inserting an invisible HTML iframe tag into an otherwise legitimate Web page to surreptitiously redirect visitors to other Websites. Hidden iframes may elude detection by the human eye, but Web content scanners can spot them and Web URL filters can block redirects to blacklisted sites.
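As a toy illustration of what such a content scanner looks for (a real engine is far more thorough, and the patterns and sample pages here are invented), a few lines of Python can flag iframes styled to be invisible:

```python
import re

# Toy content-scanner check: flag iframe tags carrying attributes or
# inline styles that make the frame invisible to the human eye.
HIDDEN_IFRAME = re.compile(
    r'<iframe[^>]*(?:width\s*=\s*["\']?0|height\s*=\s*["\']?0|'
    r'display\s*:\s*none|visibility\s*:\s*hidden)[^>]*>',
    re.IGNORECASE)

def has_hidden_iframe(html: str) -> bool:
    return HIDDEN_IFRAME.search(html) is not None

clean = '<p>Welcome</p><iframe src="https://partner.example/widget"></iframe>'
infected = ('<p>Welcome</p><iframe src="http://evil.example/kit" '
            'width="0" height="0"></iframe>')
print(has_hidden_iframe(clean))     # False
print(has_hidden_iframe(infected))  # True
```

Attackers obfuscate injections to dodge exactly this kind of pattern match, which is why scanners pair signatures like this with script emulation and URL reputation rather than relying on regexes alone.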

#8: In a dead heat with Iframe-F is JS.Redirector.BD, a JavaScript Trojan that also redirects users to Websites they had not intended to visit. Like some other members of the large JS.Redirector family, this Trojan tries to evade blacklist filters by using obfuscation techniques like dynamically-generated target URLs.

#7: Nosing past Redirector.BD is Backdoor.Win32.Alureon. Alureon refers to a family of dynamic, multi-faceted Trojans intended to generate revenue from a victim's Web activities. Malware components within each instance vary, but Alureon has been seen to alter DNS settings, hijack search requests, display malicious ads, intercept confidential data, download arbitrary files, and corrupt disk drivers. In fact, threat reports indicate that Alureon has been used to drop TDSS onto infected PCs.

#6: Tied for middle-of-the-pack is Worm.Win32.VBNA.b. VBNA implants itself in a user's Documents and Settings folder, adding a Run key to the registry. Thereafter, VBNA auto-launches and propagates itself to neighboring PCs via writable fileshares. VBNA also displays a fake virus infection warning to trick users into purchasing fake anti-malware (which is often just more malware). Scare tactics like this appear to be on the rise, preying upon uninformed users.

#5: Next up is JS.Redirector.AT, another member of this Trojan family famous for redirecting users to other Web sites. Destination sites reportedly have displayed porn, phished for confidential data, and implanted malware on the victim's PC. One way to inhibit these Trojans is to disable JavaScript execution – if not in the browser, then in Acrobat Reader to block JavaScript hidden in PDFs. Exploits targeting Adobe PDF, Flash, and Sun Java vulnerabilities were particularly hot in 1H10.

#4: Taking fourth place is Mal/GIFIframe-A, a sibling to the afore-mentioned Iframe-F. GIFIframe-A also uses iframe tags, but this family of malware exploits iframes that have been injected into files encoded using popular graphic formats like GIF and JPG. When a user visits an infected Website and attempts to load the graphic, the injected iframe is processed, executing attacker-supplied code.

#3: At third, representing three percent of 2Q10 encounters, is a keylogger called PSW.Win32.Infostealer.bnkb. Dozens of Infostealer variant Trojans exist, targeting a wide variety of institutions and their customers. All work by capturing keystrokes, scanning for specific Web transactions, and stealing usernames, passwords, account numbers – typically those associated with online banking.

#2: A new JS.Redirector variant took second place in 2Q10: JS.Redirector.cq. Like other family members, this Trojan uses malicious JavaScript to redirect users. In this case, users find themselves at Websites that pretend to scan for viruses, then download fake anti-virus code, no matter where the user clicks on the displayed window. But how do legitimate Websites get infected with JS.Redirector in the first place? One reportedly common vector: SQL injection.

#1: First place goes to the now infamous Trojan downloader Exploit.JS.Gumblar. According to Cisco, Gumblar represented 5 percent of all Web malware in 2Q10, down from 11 percent in 1Q10. Gumblar is a downloader that drops an encrypted file onto the victim's system. Gumblar runs that executable without user consent, injecting JavaScript into HTML pages to be returned by a Web server or displayed by a user's Web browser. The injected JavaScript usually contains an obfuscated exploit; early scripts downloaded more malware from gumblar.cn – thus giving this Trojan its name.

Cisco's 2Q10 list was generated by IronPort, which uses Sophos, Webroot, and McAfee malware detection engines. Other vendors use different naming conventions and publish slightly different lists that represent other monitored data sources. And next quarter there will be new lists -- probably composed largely of variants.

The purpose of such lists, therefore, is not to tell you which malware to scan for. That job falls to continuously-updated anti-malware defenses, installed on desktops, servers, and gateways. Instead, use this list and others like it to identify and proactively fight trends that are likely to persist or grow and target your Web servers and users tomorrow.

Wednesday, August 4, 2010

The 2010 Verizon Data Breach Report Is Out

From: http://isc.sans.edu/diary.html?storyid=9283

This year's data breach report continues this valuable narrative. This year's report is based on a larger case sample than in previous years, thanks to a partnership with the United States Secret Service, which contributed information on a few hundred of its cases this year. Many of the findings echo those of previous years (excerpts below).


Who is behind Data Breaches?
70% resulted from external agents
48% caused by insiders
11% implicated business partners
27% involved multiple parties

How do breaches occur?
48% involved privilege misuse
40% resulted from hacking
38% utilized malware
28% involved social tactics
15% comprised physical attacks

What commonalities exist? (this was the interesting section for me)
98% of all data breached came from servers
85% of attacks were not considered highly difficult
61% were discovered by a third party
86% of victims had evidence of the breach in their log files
96% of breaches were avoidable through simple or intermediate controls
79% of victims subject to PCI DSS had not achieved compliance

Come on! Not only don't folks seem to be implementing some basic protections, but when they're told that they've been compromised (in their log files), no-one is listening! I guess this isn't much different than in previous years, but it'd be nice to see a positive trend here.

I'm not sure that I believe the low numbers for government data breaches (4%). I guess the report can only summarize data from cases that are "seen" by the incident handlers.

Find the full report here ==> http://www.verizonbusiness.com/resources/reports/rp_2010-data-breach-report_en_xg.pdf

Take a few minutes to read it over coffee this morning - I found it a good read, and just about the right length for that first cup !

Tuesday, August 3, 2010

Companies slow to create social media rules

From: http://www.detnews.com/article/20100802/BIZ04/8020349/1001/Companies-slow-to-create-social-media-rules


Companies beware: Employees aren't the only ones who should worry about a social media backlash.

Studies show that creating social media policies for employees helps companies prevent problems, but most firms would rather ignore the issue.

"I think it's hard to avoid," said Dean Pacific, a labor employment lawyer for Warner Norcross & Judd in Grand Rapids.

"I can't imagine telling anyone, 'Just completely stay away from social media. Pretend like it's not there.'"

But that's exactly what many companies are doing.

A Manpower survey of 34,400 companies worldwide found that 20 percent have a social media policy. Experts say ignoring the problem will only make it worse.

Instead, Pacific encourages companies to craft a social media policy that applies to both employees and management.

"We're still as a society trying to figure out what the limits and the boundaries are," said Michael Fertik, CEO and founder of ReputationDefender, a worldwide online reputation management and privacy company based in Redwood City, Calif.

"I don't think it's established yet because the technology's moving a lot faster than the law is."

Most companies with a social media policy reserve the right to monitor employee activity on work computers. Nothing workers write -- on e-mail, Facebook, Twitter or any other social network -- is private.

Some companies take it a step further by blocking social media sites on work computers.

ScanSafe, an online security company that provides a website blocking service to thousands of global corporations, found that 76 percent of its clients block social media sites. These companies place a greater focus on hiding sites like Facebook and YouTube than online shopping, weapons and alcohol.

"They view it similar to pornographic or gambling or shopping sites," Pacific said. "The big concern is that they're time wasters -- they're productivity busters."

Ford Motor Co. is among the businesses that realize they can't prevent employees from using social media. So, the Dearborn automaker asks workers to explain that the views expressed are their personal opinions -- not those of the company.

"We're not authorizing an employee to be a spokesperson," said Scott Monty, who manages Ford's social media programs. "We want to make it clear that if they choose, they can talk about Ford, but they have to do so from a personal perspective."

Most companies also forbid employees from sharing trade secrets.

Pacific said many leaks are inadvertent. Employees who leave a company and are looking for new jobs might mention on their LinkedIn resumes that they were responsible for "$190 million in sales." Others who are still with the company might use Facebook to tout a cool product they are developing.

"It may not be bad intentions at all -- and it probably isn't -- but it can lead to harm to the company if this confidential information gets out there," Pacific said.

Other times the information that gets leaked isn't true at all. "But it still hurts the reputation of the company, because it looks real," Fertik said.

One of the biggest problems companies face is privacy violations. A growing number of employers check sites like Facebook and MySpace to screen job candidates, but experts say companies should be careful not to invade candidates' privacy.

"Just on the very first page, you're going to see their sex, date of birth, marital status and their religious views," Pacific said. Asking about a job candidate's age, marital status and religious views in an employment interview is illegal.

Bosses who spy on employees by sneaking onto their private profiles and digging through personal information could jeopardize the company's finances. And bitter candidates who discover they didn't get a job because of something on their profile might file a discrimination lawsuit.

So Pacific recommends companies take precautions by having one employee collect information from the site and another review the information.