Welcome to the Proxy Update, your source of news and information on Proxies and their role in network security.

Thursday, August 19, 2010

Do You Really Need Anti-virus in Web Filtering?

The topic of anti-virus or anti-malware scanning in the Secure Web Gateway is one that many organizations face when trying to deal with the onslaught of threats from the web. Traditionally, web gateways include features such as proxy capability, URL filtering, and maybe even real-time web page categorization to help secure the organization's users against threats from the web and enforce corporate policy.

The argument many organizations make is that they are already paying for URL filtering, real-time web rating, and an anti-malware program on the desktop. Why should they spend more to get anti-malware and anti-virus on the Secure Web Gateway? What benefit, if any, do the end-user and the organization get from adding anti-malware to the Secure Web Gateway, when the end-user is supposedly already protected by a desktop anti-virus program?

These are good questions, and ones an organization needs to look at carefully when deciding whether to add anti-malware to the Secure Web Gateway. While an organization may have anti-malware programs running on its end-users' desktops, it generally has little control over how often those programs are updated, or whether they are even running (some end-users may have disabled them to gain performance on their laptops or desktops). Would you trust your corporate security to your end-users? By relying on desktop anti-malware alone, you're essentially relying on each end-user to practice good cyber hygiene.

Maybe as an administrator you already trust that URL filtering and dynamic real-time rating are protecting you from web threats. While these two technologies are great as part of a layered defense, they each serve a distinct role in protecting the end-user and the organization. A URL filtering database provides the quickest way to give an end-user feedback on whether a website is safe: known bad websites will already be categorized as malicious.

A website URL not found in the URL filtering database moves to the next layer of defense, typically a cache of URL information hosted by the vendor, and then, if still not found, a real-time rating system that examines the website on the fly to determine its category. All of these mechanisms work toward determining not only the category of a website, but also whether the website has malicious content, and then blocking it (or an embedded URL that contains malicious content) as appropriate.
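
To make the ordering concrete, here's a minimal sketch of such a layered lookup. The data sources and function names (localDatabase, cloudCache, rateInRealTime) are illustrative assumptions, not any vendor's actual API:

    // Illustrative layered URL categorization chain; the data sources
    // and function names are hypothetical, not a vendor's API.
    type Verdict = { category: string; malicious: boolean };

    const localDatabase = new Map<string, Verdict>(); // on-box URL database
    const cloudCache = new Map<string, Verdict>();    // vendor-hosted cache of recent ratings

    function rateInRealTime(url: string): Verdict {
      // Stand-in for a real-time analysis of the page content itself.
      return { category: "uncategorized", malicious: false };
    }

    function categorize(url: string): Verdict {
      // Layer 1: the local URL database answers fastest for known sites.
      const local = localDatabase.get(url);
      if (local) return local;
      // Layer 2: the vendor's cloud cache of recently rated URLs.
      const cached = cloudCache.get(url);
      if (cached) return cached;
      // Layer 3: nothing known, so rate the page content on the fly.
      return rateInRealTime(url);
    }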

All this sounds great, and many administrators may be lulled into thinking they are completely protected by this layered defense. In reality they should add one more layer: anti-malware/anti-virus scanning at the Secure Web Gateway. Why is this necessary? Think about what happens when a known good website gets attacked and ends up infected with malware or a virus. There will be a period of time before the URL database, URL cache, or even the real-time rating system picks up on the infection. Until that information is updated, the website is being passed along as a "good" site. An anti-malware program at the gateway adds the layer of defense that catches the infection and prevents the end-user from downloading a virus during that short window of vulnerability.

No infection is a good infection, and layered defense is a necessity against today's web threats. Make sure you close that additional window of vulnerability by adding anti-malware/anti-virus to your Secure Web Gateway. Running a different vendor's anti-virus at the gateway than on the desktop adds yet another layer of protection: if one vendor misses a threat, the other has a greater chance of recognizing it.

Friday, February 26, 2010

Web Gateway Deployment Methodologies - SPAN Port Deployment

In today's complex network architectures, sometimes it seems there are limitless ways to deploy networking equipment. While that may be the case for some networking gear, in actuality there are probably only a few proven deployment methodologies for web gateways that are effective and provide complete security. In this series of articles, we've talked about the four most common types of web gateway deployments. Sometimes referred to as forward proxies, these devices are used to secure web access for an organization's internal end-users. The four commonly used deployment scenarios for web gateways are: inline proxy, explicit proxy, transparent, and SPAN port. Each of these deployments has its advantages and disadvantages, and we've discussed these as we explained each methodology over the last few days. Today's article is the last in the series and covers SPAN port deployments, sometimes referred to as TCP reset.

SPAN Port Deployment

The last deployment methodology we'll discuss is SPAN (Switched Port ANalyzer) port deployment. This method is sometimes called TCP Reset deployment, as it relies on TCP resets to implement the web gateway's policy. The web gateway is deployed by attaching it to a SPAN port on a switch. (See Figure 4) Unlike the other three deployment methods, which sit in the traffic path and implement policy through the network responses the web gateway itself issues, a web gateway on a SPAN port sees only a mirrored copy of the traffic and implements policy by issuing a TCP reset to the client system to prevent it from completing the download of offending content.
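
Conceptually, the gateway watches the mirrored stream and, when it spots offending content, forges a reset that looks to the client like it came from the server. The sketch below shows roughly what such a forged segment must carry; the field names are illustrative, and a real implementation crafts raw layer 3/4 packets:

    // Conceptual sketch only: the field names are illustrative, and a real
    // implementation crafts raw layer 3/4 packets rather than objects.
    interface TcpSegment {
      srcIp: string; dstIp: string;
      srcPort: number; dstPort: number;
      seq: number; ack: number;
      payloadLength: number;
      flags: { rst: boolean; ack: boolean };
    }

    // Given an offending server-to-client segment seen on the SPAN port,
    // forge a reset the client will accept as if it came from the server.
    function forgeResetToClient(observed: TcpSegment): TcpSegment {
      return {
        srcIp: observed.srcIp,       // spoof the server's address
        dstIp: observed.dstIp,
        srcPort: observed.srcPort,
        dstPort: observed.dstPort,
        // A reset is honored only if its sequence number lands inside the
        // client's receive window; this is why timing is so critical.
        seq: observed.seq + observed.payloadLength,
        ack: observed.ack,
        payloadLength: 0,
        flags: { rst: true, ack: true },
      };
    }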

SPAN Port Advantages

SPAN port deployments allow larger-scale deployments, because a monitoring-mode deployment typically uses fewer resources than an inline, explicit, or transparent deployment, which must actively process traffic. SPAN port deployment can be useful if you think your hardware might be undersized for your needs.

SPAN Port Disadvantages

One of the disadvantages of a SPAN port deployment is that the gateway does not see all the traffic. Corrupt network packets, packets below minimum size, and layer 1 and 2 errors are usually dropped by the switch. In addition, it's possible for a SPAN port to introduce network delays: the software architecture of low-end switches introduces delay by copying the spanned packets, and if the data is being aggregated through a gigabit port, a delay is introduced as the signal is converted from electrical to optical. Any network delay can be critical, since TCP reset is used to implement policy.

SPAN ports also have an issue when there is an overload of traffic: typically the port will drop packets, and there will be some data loss. In a high network load situation, most web gateways connected to a SPAN port will not be able to respond quickly enough to keep malware from spreading across a corporate network.

Recently a Network World article (Dec 7, 2009) discussed the TCP reset method used by web gateways to implement policy:

Too clever by half, perhaps – TCP RESET has several drawbacks.

First, a cyber attacker can cause a "self-inflicted DoS attack" by flooding your network with thousands of offending packets. The TCP RESET gateway responds by issuing two TCP RESETs for every offending packet it sees.

The TCP RESET approach is worthless against a cyber attacker who uses UDP to "phone home" the contents of your sensitive files.

The gateway has to be perfectly quick; it has to send the TCP RESET packets before the client (victim) has processed the final packet of malware.

Ergo – deep and thorough inspection of network traffic before it's allowed to flow to the client is the most effective way to stop malware.

… In other words, don't just wave at the malware as it goes by.
--Barry Nance, Network World, Dec 7, 2009


Conclusion

While there are four common deployment methodologies to choose from when implementing a secure web gateway, there are really only three clear choices for IT departments. The choice among inline, explicit, and transparent will have to be made based on the needs and resources of the organization and the IT department. While SPAN port deployment and TCP reset may seem like a reasonable solution, there are enough drawbacks that a serious web gateway deployment should avoid this methodology.

Thursday, February 25, 2010

Web Gateway Deployment Methodologies - Transparent Deployment

In today's complex network architectures, sometimes it seems there are limitless ways to deploy networking equipment. While that may be the case for some networking gear, in actuality there are probably only a few proven deployment methodologies for web gateways that are effective and provide complete security. In this series of articles, we're talking about the four most common types of web gateway deployments. Sometimes referred to as forward proxies, these devices are used to secure web access for an organization's internal end-users. The four commonly used deployment scenarios for web gateways are: inline proxy, explicit proxy, transparent, and SPAN port. Each of these deployments has its advantages and disadvantages, and we'll discuss these as we explain each methodology over the next few days. We've already examined Inline and Explicit deployments. Today we'll look at Transparent deployments.

Transparent Deployment


Transparent deployment allows a web gateway to be placed in any network location that has connectivity (similar to explicit mode deployment) (See Figure 3), reducing the need for network configuration changes. In addition, there's no overhead of configuring each end-user's system, since the routing of HTTP and HTTPS traffic to the gateway is typically done by a router or other network device. Transparent deployment is often used when an organization is too large for an inline deployment and does not want the added work and overhead of an explicit deployment. Most transparent deployments rely on the Web Cache Communication Protocol (WCCP), a protocol supported by many network devices. Alternatively, transparent deployment can also be achieved using Policy Based Routing (PBR).
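
As a rough illustration, on a Cisco IOS router the standard WCCP web-cache service group (which redirects port 80 traffic to a registered gateway) is enabled along these lines. This is a sketch only; interface names, service groups, and supported WCCP versions vary by environment:

    ! Sketch: enable the standard WCCP web-cache service group
    ip wccp web-cache
    ! Redirect inbound client web traffic on the LAN-facing interface
    interface GigabitEthernet0/0
     ip wccp web-cache redirect in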

Transparent Deployment Advantages

The main advantages of deploying a web gateway in transparent mode include narrowing the amount of traffic processed by the proxy and the ability to more easily implement redundancy of the web gateway. In addition, transparent deployment does not require changes to end-user systems.

Transparent Deployment Disadvantages

Transparent deployment does depend on the availability of either WCCP or PBR, and on support for these in the web gateway; typically such support is available only on more sophisticated web gateways. Configuration can also be trickier, as the router and the web gateway need to support compatible versions of WCCP. More in-depth networking expertise is required to implement a transparent mode deployment; that's typically not a problem for larger organizations, but it may be an issue for smaller ones.

Tomorrow we'll look at SPAN port deployments.

Wednesday, February 24, 2010

Web Gateway Deployment Methodologies - Explicit Deployment

In today's complex network architectures, sometimes it seems there are limitless ways to deploy networking equipment. While that may be the case for some networking gear, in actuality there are probably only a few proven deployment methodologies for web gateways that are effective and provide complete security. In this series of articles, we're talking about the four most common types of web gateway deployments. Sometimes referred to as forward proxies, these devices are used to secure web access for an organization's internal end-users. The four commonly used deployment scenarios for web gateways are: inline proxy, explicit proxy, transparent, and SPAN port. Each of these deployments has its advantages and disadvantages, and we'll discuss these as we explain each methodology over the next few days. Yesterday we looked at Inline deployments, and today we'll examine Explicit deployments.


Explicit Deployment


Explicit deployment is fairly common when a web gateway is deployed in a larger network whose design requires that there be no single point of failure. Explicit deployment allows the web gateway to be located anywhere on the network that is accessible by all end-users and from which the device itself has access to the internet. (See Figure 2) Explicit deployment is done through an explicit proxy definition in the web browser. To make this kind of deployment easier, an administrator can use PAC or WPAD files to distribute the proxy settings to end-users' browsers.

When using explicit deployment, it is extremely important to have your firewall properly configured to prevent users from bypassing the proxy. The firewall needs to be configured so that only the proxy can talk through it using HTTP and HTTPS; all other hosts/IP addresses should be denied. In addition, all other ports need to be locked down to prevent end-users from setting up their own internal proxy that goes out to the internet via HTTP on a port other than the commonly used ones (80 and 443).
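
As an illustration of that lockdown, here's what the two key rules might look like in Linux iptables syntax; the proxy address 10.0.0.50 is a placeholder, and your firewall's configuration language will differ:

    # Sketch: allow only the web gateway (placeholder 10.0.0.50) out on web ports
    iptables -A FORWARD -s 10.0.0.50 -p tcp -m multiport --dports 80,443 -j ACCEPT
    # Drop direct web traffic from every other internal host
    iptables -A FORWARD -p tcp -m multiport --dports 80,443 -j DROP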

Explicit Mode Advantages

The main advantages of deploying a web gateway in explicit mode include narrowing the amount of traffic processed by the web gateway (you can limit it to HTTP-based traffic only) and the ability to more easily implement redundancy for web gateways in your environment. Explicit mode deployment in an environment without an existing web gateway is also less disruptive to the network, as the web gateway can be placed anywhere in the network that is accessible by all end-users and can reach the internet through the firewall.

Explicit Mode Disadvantages

The disadvantage of explicit mode deployment is typically the IT administrative overhead, as each end-user's system needs a configuration change in order to work properly. While PAC and WPAD reduce some of this overhead, any misconfigured end-user system will result in a helpdesk call and require a sysadmin to rectify the situation for the end-user. Explicit mode deployment also relies heavily on a properly configured network and firewall; as discussed earlier, any hole in the network or firewall can be exploited by a knowledgeable end-user to bypass the web gateway.

Tomorrow we'll look at Transparent deployments.

Tuesday, February 23, 2010

Web Gateway Deployment Methodologies - Inline Deployment

In today's complex network architectures, sometimes it seems there are limitless ways to deploy networking equipment. While that may be the case for some networking gear, in actuality there are probably only a few proven deployment methodologies for web gateways that are effective and provide complete security. In this series of articles, we'll talk about the four most common types of web gateway deployments. Sometimes referred to as forward proxies, these devices are used to secure web access for an organization's internal end-users. The four commonly used deployment scenarios for web gateways are: inline proxy, explicit proxy, transparent, and SPAN port. Each of these deployments has its advantages and disadvantages, and we'll discuss these as we explain each methodology over the next few days. For today's article we'll focus on inline deployments.

Inline Proxy Deployment

Inline deployment is probably the simplest and the easiest to describe. Smaller deployments, like branch office scenarios, typically use inline deployment, due to its ease of deployment and the assurance that all traffic passes through the gateway.

With an inline deployment, the web gateway is placed directly in the path of all network traffic going to and from the internet. (See Figure 1). In this scenario, all network traffic will go through the web gateway device. If you choose this deployment methodology, make sure your web gateway is capable of bypassing network traffic that you don’t want processed by the web gateway. In many instances, you can choose to either “proxy” or “bypass” a specific protocol. If you “proxy” the protocol, that means the web gateway will terminate the traffic from the client to the server locally, and then re-establish a new connection acting as the client to the server to get the requested information.
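
A simple way to picture this administration is as a per-protocol action table. The protocols and defaults below are illustrative examples, not any vendor's policy language:

    // Illustrative per-protocol handling table for an inline gateway;
    // the protocol list and actions are examples, not a vendor's policy.
    type Action = "proxy" | "bypass";

    const protocolPolicy: Record<string, Action> = {
      http: "proxy",   // terminate and re-originate, with full inspection
      https: "proxy",
      ftp: "proxy",
      cifs: "bypass",  // pass through untouched if the gateway can't handle it
    };

    function handle(protocol: string): Action {
      // On an inline device even unknown protocols must be dealt with;
      // defaulting to bypass keeps traffic flowing.
      return protocolPolicy[protocol] ?? "bypass";
    }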

Inline Deployment Advantages

The upside of an inline deployment is the ease of deployment and the assurance that all web traffic will flow through the device. As long as the device is inline and on the only path available to the internet, there is no chance of an end-user bypassing its controls, since all internet-bound HTTP traffic will be processed and handled by the web gateway. Inline is generally considered the most secure deployment methodology and the way to go if security is the primary concern.

Inline Deployment Disadvantages


The downside of an inline deployment is the single point of failure. Even with technologies like "fail to wire," which allow all traffic to flow through when a device fails, many organizations are uncomfortable with a single device in the data stream to the internet. Any partial failure of the device could cause an outage, which is the main concern with this deployment. For a small organization or a branch office, a short disruption is probably not as large a concern as it is for a larger organization that views internet accessibility as mission critical.

Another disadvantage of inline deployment is the need to manage all the protocols that pass through the web gateway (a side effect of this being the most secure method of deployment). Because the web gateway is inline, every other protocol (FTP, CIFS, etc.) will need to be either proxied or bypassed (for protocols the web gateway cannot handle). The IT admin will need to administer this list and the handling of each protocol used by the organization.

Tomorrow, we'll look at Explicit Deployments.

Friday, January 29, 2010

SEO Spreads Risk

This topic has been discussed quite a bit lately by almost all the major security vendors, but we haven't really talked about it here, so it's probably a good time to remind everyone: one of the newest forms of attack on the web is poisoning Google search results. Attackers use Search Engine Optimization (SEO) techniques to get infected web pages ranked highly in search results, especially around high-interest current topics (recently the Haiti earthquake, the iPad announcement, Toyota's recall, and President Barack Obama's State of the Union address have all been targets).

What's really devastating about these poisoned search results is that the end-user isn't likely to realize they are getting infected, since the search result may eventually lead to a well-known site, like CNN, but the referring link will carry a piece of malware, infecting the end-user's machine.

All the more reason IT admins need to make sure there's a proxy in place acting as a Secure Web Gateway with the right anti-malware software, and that traveling users on laptops have some local client software protecting their web browsing as well.

Thursday, December 10, 2009

A Separate AV/Malware Box?

For those admins who are looking to refresh their proxy architecture and evaluating the various vendors of Secure Web Gateways, you may be wondering whether there's a benefit to having the AV (anti-virus) and malware scanning on a separate box. The 600 lb gorilla in the web gateway appliance marketplace, Blue Coat Systems, uses a two-box architecture, while most of its competitors use a single-box design running the AV and malware scanning on the same box as the gateway.

What's the advantage of the second box? In reality the big gain is scale and throughput: by offloading scanning to a second box, you can handle much bigger throughput and many more connections. Even if neither of these is a concern for you, consider what happens when an AV or malware engine goes into a CPU usage storm, and whether you want it to affect the other users of the web gateway. There are files designed to send AV engines into infinite processing loops, and if your AV or malware engine hasn't been tuned to detect them, an AV CPU spike will cause web downtime for your end-users if you aren't using a separate box for AV and malware scanning.
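
Abstractly, the hand-off looks like the sketch below: the gateway sends a fetched object to the scanning box and acts on the verdict. Real two-box deployments typically speak ICAP (RFC 3507) rather than plain HTTP, and the endpoint and response shape here are hypothetical:

    // Abstract sketch of offloading scanning to a dedicated box; the
    // endpoint and response shape are hypothetical. Real deployments
    // typically speak ICAP (RFC 3507) rather than plain HTTP.
    async function scanOnDedicatedBox(body: ArrayBuffer): Promise<boolean> {
      const response = await fetch("http://av-scanner.internal:1344/scan", {
        method: "POST",
        body,
      });
      const verdict = (await response.json()) as { clean: boolean };
      return verdict.clean; // serve the object to the end-user only if clean
    }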

If web access isn't mission critical to your organization, and you aren't concerned with scale and throughput, a single-box solution may be the answer. But before you go that route, make sure you price out the two-box solution, and make the right decision based on all the factors and features available to you.

Wednesday, October 28, 2009

Cookies sound sweet, but they can be risky

USA TODAY ran a story this week with the above title. Catchy for the typical reader, but it has so much more meaning when you're an IT manager. For the uninitiated, everywhere you go on the Internet, you leave behind small footprints called cookies.

From the USA Today article:

Cookies track where you have gone online and are stored on your hard drive. The websites you visit tap into those cookies so they can tailor promotions to you or retrieve data such as your credit card information. Every site you visit also registers your numerical IP (Internet protocol) address and can track information associated with it. Your IP address contains information like your hometown, but not your name.

Cookies come in two types: first- and third-party. First-party cookies are kept only by the site you visit and any affiliated properties, such as the company's Facebook fan page. This information is not shared with other websites and is generally not considered worrisome. Third-party cookies are those shared across various websites; for example, if you click on certain ads or search for a car on sites that share such cookies, your information goes to a far larger audience.


USA Today does offer some advice to protect yourself when browsing the web:

•Check website privacy policies. Most sites state what information is gathered and how it is used. Some will let you opt in or opt out of the collection process. Check the policy especially if you plan to register on a site.

•Disable cookies. On your Web browser, you likely have an option to disable all cookies or those that apply to third-party uses. Disabling first-party cookies means websites won't likely have your credit card or password information stored anymore. Greve has disabled third-party cookies on her computer and "sleeps better at night" because of it, she says.

•Remove cookies regularly. You can set your browser to automatically clear your entire browsing history and cookies, or do it manually. But Greve says even though cookies are removed from the computer, "Once you put your information out, it's out there, and it's going to get to stores in one way, shape or form."

•Consider installing an "anonymizer." These services hide your IP address wherever you go, but Greve warns there have been "phishing" attacks — e-mails that try to get personal information — through some of these.

•Use a proxy server. These devices, which are intermediaries between networks, allow you to browse in private.


Of course that last recommendation is one I heartily endorse. Anyone managing a network should consider putting a proxy server in place to help protect end-users browsing the web. In addition, make sure that proxy server is kept up to date on its URL database, real-time categorization, and malware scanning software.

Monday, October 5, 2009

PAC File Creation

For those of you who manage proxies and are looking for a good guide to writing PAC files, there's a website that covers almost everything you'd need:

http://www.returnproxy.com/proxypac/

In addition to tips and tricks, it also has some sample files for you to use, a way to test files, and troubleshooting tips.
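
For reference, a minimal PAC file looks something like the sketch below. PAC files are plain JavaScript built around the FindProxyForURL entry point; the proxy host and internal domain are placeholders:

    // Minimal example PAC file; proxy host and domain are placeholders.
    function FindProxyForURL(url, host) {
      // Send intranet traffic direct, everything else through the proxy,
      // falling back to a direct connection if the proxy is unreachable.
      if (isPlainHostName(host) || dnsDomainIs(host, ".example.internal")) {
        return "DIRECT";
      }
      return "PROXY proxy.example.internal:8080; DIRECT";
    }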

Thursday, September 24, 2009

How SSL-encrypted Web connections are intercepted

I've written plenty of articles in the past about SSL and proxies; SSL is an important piece you shouldn't forget when securing web access for your organization. SearchSecurity.com published an article this week on how SSL-encrypted web connections can be intercepted, from legitimate uses (proxying and filtering) to illicit interception. It's a good, long article explaining the different technologies involved. Click the link in the title above for the full article.

Thursday, September 17, 2009

Choosing the Right Anti-Malware/Anti-Virus for Your Proxy

I've talked a lot about having a scanning engine in your enterprise proxy implementation. You need this to make sure you're scanning any web pages your end-users visit for malware or viruses.

This of course raises the question of which anti-malware or anti-virus software you should be using with your proxy. It's a tough question if the proxy is new to your network, or if you haven't run an anti-malware package with your proxy before.

Almost every organization out there is already running anti-virus and anti-malware for email and desktops. Deciding which package to run for web traffic depends on what you're trying to accomplish. If you need an extra layer of protection, and the desktop package already scans web pages, you probably want to run a different vendor on the proxy so that you get an added layer of defense.

The other thing you should look into is how much CPU each vendor uses, and how easy it is to write policy that determines what gets scanned, so that not everything is scanned (radio streams and video streams, for example, should probably not be scanned). In addition, cost, reputation, and actual catch rates will be factors in your decision. There's one site out there, avtest.org, that rates the catch rates of the various anti-virus and anti-malware vendors and may be a good starting point for research. Of course not all vendors will agree with the results from this site, and it's also important to research false positive rates. The right answer for anti-malware and anti-virus packages will be different for each organization, so be sure to do your research when you select the package to work with your proxy.
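
As a sketch of what such scanning policy might look like, keyed on content type (the exemption list is an example, not a vendor's policy language):

    // Illustrative scan policy keyed on content type; the exemption
    // list is an example, not any vendor's actual policy language.
    const skipScanning = new Set([
      "audio/mpeg",            // radio streams
      "video/mp4",             // video streams
      "application/x-mpegURL", // live streaming playlists
    ]);

    function shouldScan(contentType: string): boolean {
      // An endless media stream would tie up the scanning engine
      // indefinitely, so streaming types are usually exempted.
      return !skipScanning.has(contentType);
    }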

Wednesday, February 25, 2009

Other Proxy Types

I've talked about forward and reverse proxies in this blog, and commented on anonymous proxies as well as how proxies are deployed. I came across a list of other proxy types (or rather, terminology used to describe proxies) and thought I'd share this information.

You may see references to four different types of proxy servers that are available on the Internet (as opposed to the forward or reverse proxies that enterprises use); a sketch of the header behavior behind these distinctions follows the list:

Transparent Proxy - This type of proxy server identifies itself as a proxy server and also makes the original IP address available through the HTTP headers. These are generally used for their ability to cache websites and do not effectively provide any anonymity to those who use them. However, the use of a transparent proxy typically allows end-users to get around simple IP bans. They are transparent in the sense that the end-user's IP address is exposed, not in the sense that the end-user is unaware of using one.

Anonymous Proxy - This type of proxy server identifies itself as a proxy server, but does not make the original IP address available. This type of proxy server is detectable, but provides reasonable anonymity for most end-users.

Distorting Proxy - This type of proxy server identifies itself as a proxy server, but makes an incorrect original IP address available through the HTTP headers.

High Anonymity Proxy - This type of proxy server does not identify itself as a proxy server and does not make the original IP address available.
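
Roughly, those distinctions come down to what the proxy reveals in its request headers. Header behavior varies by implementation, so treat the sketch below as illustrative only; the addresses are placeholders:

    // Rough sketch of what each proxy type reveals in request headers.
    // Real behavior varies by implementation; addresses are placeholders.
    function headersFor(type: string, clientIp: string): Record<string, string> {
      switch (type) {
        case "transparent": // admits to proxying AND exposes your address
          return { "Via": "1.1 proxy", "X-Forwarded-For": clientIp };
        case "anonymous":   // admits to proxying, hides your address
          return { "Via": "1.1 proxy" };
        case "distorting":  // admits to proxying, supplies a bogus address
          return { "Via": "1.1 proxy", "X-Forwarded-For": "203.0.113.7" };
        default:            // high anonymity: sends neither header
          return {};
      }
    }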

There are risks to using proxies freely available on the Internet. When using a proxy server (for example, an anonymizing HTTP proxy), all data sent to the service being used (for example, the HTTP server of a website) must pass through the proxy server before being sent to the service, mostly in unencrypted form. It is therefore possible, and has been demonstrated, for a malicious proxy server to record everything sent through it, including unencrypted logins and passwords. By chaining proxies that do not reveal data about the original requester, it is possible to obscure activities from the eyes of the user's destination. However, more traces will be left on the intermediate hops, which could be used or offered up to trace the user's activities. If the policies and administrators of these other proxies are unknown, the user may fall victim to a false sense of security just because those details are out of sight and mind.

The bottom line: be wary when using free Internet proxy servers. Only use proxy servers of known integrity (e.g., the owner is known and trusted and has a clear privacy policy), and never use proxy servers of unknown integrity. If there is no choice but to use an unknown proxy server, do not pass any private information through it (unless it is properly encrypted).

It's a good idea to keep your end-users educated about the corporate proxy as well as the dangers of free proxies that they may be attempting to use to bypass your corporate proxy.

Tuesday, December 30, 2008

Back to Basics

It's been a while since we've discussed what a proxy is. This recent article, which I've linked in the title above and here (http://www.itecharticles.com/what-is-a-proxy-server/), gives a nice overview of what a proxy is and what it does. It was a nice reminder for us to get back to the topic of the proxy.


What is a Proxy Server? (From iTechArticles.com)

by admin on December 27, 2008

A proxy server is a computer that services requests from its client computers by forwarding client requests to outside servers and also acting as a gateway for any incoming data from an external server. Client computers will usually have to go through the proxy server when requesting a web page, a file, or some other resource that is located on a remote server. The proxy will then connect to the specified server and act on behalf of the requesting client. Depending on security settings and other restrictions that have been put into place, the server may alter a request that has been made to a remote server. On the other hand, it may also alter the response of the remote server before forwarding it to the client. At other times it may not need to contact the remote server at all in order to service a request: in such cases, the proxy acts as a cache server, storing previously accessed web pages and resources so that when they are requested later on, they can be retrieved much faster.

The most common types of proxy servers are gateways. A gateway is a type of proxy server that handles data coming across a number of platforms that are running on different protocols. Also called protocol converters, these gateways pass unmodified replies and requests between outside servers and clients and can be placed at various points on a local area network as well as across the internet. On the internet, gateways convert packets formatted in one protocol, like TCP/IP, to another format, like AppleTalk, before sending them to a client computer. Gateways can be either hardware or software. In most cases, however, they are implemented as software in a router.

In order for a proxy server to act as a gateway, it must understand the protocols it is going to handle. Gateways will usually be network points that separate one network from another. Commonly referred to as nodes on the internet, a gateway node acts as the end-point of one network and the beginning of another. Gateways are commonly installed between networks in a company, or by internet service providers for their clients. To improve security, gateways will also serve as firewall servers by having software installed on them. Most proxies will have two IP addresses, one serving the local area network and the other serving the wide area network, like the internet. When properly configured, these proxies ensure network security and efficiency are maintained.
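
To make the request-forwarding behavior described above concrete, here's a minimal sketch of a plain-HTTP forward proxy in Node.js. It does no caching and no policy, handles HTTP only, and the port is arbitrary:

    // Minimal sketch of a plain-HTTP forward proxy: accept a client request
    // and replay it to the origin server. No caching, no policy, HTTP only.
    import { createServer, request } from "node:http";

    createServer((clientReq, clientRes) => {
      // For a forward proxy, the client sends the full URL in the request line.
      const upstream = request(
        clientReq.url ?? "/",
        { method: clientReq.method, headers: clientReq.headers },
        (originRes) => {
          clientRes.writeHead(originRes.statusCode ?? 502, originRes.headers);
          originRes.pipe(clientRes); // relay the origin's response to the client
        }
      );
      clientReq.pipe(upstream);      // relay any request body to the origin
    }).listen(8080);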

Wednesday, December 17, 2008

What's So Hard About SSL?

As the proxy administrator, it's possible you haven't thought about SSL at all, or maybe the opposite is true, and it keeps you up at night trying to figure out how to deal with SSL-encrypted web traffic. Either way, it's definitely something you should be worrying about, whether it's a reverse proxy or a forward proxy you have implemented.

The reverse proxy scenario is of course the easier one, in that you're protecting a known set of websites. If some of them happen to be SSL-encrypted, your proxy should have a method to let you install SSL certificates for those websites and give protected access to those trying to reach internal sites.

The forward proxy scenario is the more complicated one, and probably the one keeping you up at night. When you use a forward proxy to protect your end-users from threats on external websites, it's easy to handle downloads that come across the proxy in the clear: you check the URLs and scan the content for viruses and malware.

The hard part is when your users are browsing encrypted sites. If your proxy is bypassing or tunneling encrypted sessions, then any malware hosted on those encrypted sites makes it onto your network without any URL blocking or virus and malware scanning. The reverse proxy approach discussed above, where you load the website's SSL certificate onto the proxy, is unmanageable for a forward proxy, as the scope of possible websites is enormous (and no proxy would be able to support that many SSL certificates).

There are a couple of possible answers to this dilemma. The first obvious one is to block access to all SSL-encrypted sites. Obviously this doesn't work for everyone, especially organizations using SaaS (Software as a Service) sites like salesforce.com, which depend on SSL for security. The next possibility is to block SSL only to well-known categories that you don't want on your network to begin with (possibly banking and shopping). This still leaves the question of SSL to the remaining sites. Here's where having a fully featured proxy is important: any proxy worth its salt today will have the ability to intercept SSL traffic and inspect the contents of the encrypted session. How is that done? Typically the proxy creates its own SSL certificate for the site, signed by the proxy rather than by a CA (certificate authority). This means you'll have to have your end-users trust the proxy, or pre-install that trust on systems as you stage them for end-users (otherwise end-users will get pop-ups warning them of insecure SSL sites). Intercepting the session this way allows the proxy to inspect SSL content and apply policies like URL blocking, virus and malware scanning, and DLP (data leakage protection) inspection as well.
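
One way to observe this interception from the client side is to look at who issued the certificate you receive: with an intercepting proxy in the path, the issuer will be the proxy's internal CA rather than a public certificate authority. A small sketch (the host name is a placeholder):

    // Client-side sketch: inspect the certificate issuer to spot an
    // intercepting proxy. The host name is a placeholder.
    import { connect } from "node:tls";

    const socket = connect(
      { host: "www.example.com", port: 443, servername: "www.example.com" },
      () => {
        const cert = socket.getPeerCertificate();
        // An internal/corporate CA here, rather than a public CA, suggests
        // the session is being intercepted and re-signed by a proxy.
        console.log("Issuer:", cert.issuer);
        socket.end();
      }
    );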

There is a problem here, and it's related to privacy. While intercepting SSL-encrypted sessions for company or enterprise SaaS use is fine, it's a touchier subject if you intercept a user's personal banking session, which involves the user's PIN or other passwords. If you do decide to intercept SSL, it's a good idea to block personal-use categories like banking or shopping (to avoid capturing personal data), or at least put up an acceptable use policy page that explains that SSL sessions are intercepted and that private transactions should not be done over the corporate/enterprise network (so any such access an end-user makes is at their own risk). Most proxies are capable of setting up a click-through page that presents the acceptable use policy at the start of each session an end-user makes to the internet.

There's of course one other possible scenario related to proxies and SSL, and that's when your proxy is also your WAN Optimization device. We'll tackle that one in a future blog post.

Wednesday, August 27, 2008

Web 2.0 Content

More and more of the web is two-way published content: text, images, and video. The days of single web page loads and one URL rating per site or page are evaporating. Now sites have multiple feeds, often with real-time content, search string variables from the user or cookies, plus user-authenticated content. Often referred to as Web 2.0, this display of a wide array of content based on user authentication presents specific challenges to IT administrators trying to implement a Secure Web Gateway solution in the form of a proxy.

In a proxy solution, real-time rating services help by rating the entire URL (the URL plus the parameters supplied to the web site) for complex web sites. Blue Coat Systems has DRTR, a real-time rating service which they claim provides a 7-8% coverage benefit over static URL database coverage. Blue Coat expects this to increase as Web 2.0 content continues to expand. URL databases are moving to hybrid solutions that provide hidden malware host detection, real-time cloud services, local real-time rating services, and traditional ratings. Make sure your proxy supports these latest features.

Friday, August 15, 2008

Apparent Data Types

One of the more common attacks in the email world is starting to filter over into the web world. In the email world, viruses are often distributed as the payload of an email message. Typically this payload is an executable, which means it carries a .com, .exe, .bat, or other executable suffix. As end-users have gotten more savvy, hackers have started trying to obscure their attachments so that the end-user is fooled into thinking the file is a data type that's not an executable.

The easiest way of doing this is taking the extension suffix on a file and changing it to something that the typical end-user will want to click on, download and execute. A typical example of this would be of course to take an executable and disguise it as an image file or video clip.

In reality it isn't quite that easy to deceive an end-user into executing a virus, as simply changing the suffix on a file would make it incapable of being executed. The problem comes about because files shuffled around the Internet are usually encoded or packed, using BASE64, zip, or some other encoding mechanism. The encoding can claim to contain a jpg file (for example, via the MIME Content-Type header), but the actual file, when decoded, may have a name like "image.jpg.exe". For most people this is problematic, as Windows by default hides the extension, and most end-users would think they are looking at a file called "image.jpg".

While many anti-malware programs will block known viruses and malware, a new variant could get past the malware scan. This is where a proxy with better security mechanisms can save your organization. Some proxies are capable of detecting mismatches in apparent data types in encoded files, which helps ensure that policies blocking exe files or other executables actually get enforced. Make sure your proxy is one that understands how to look for a mismatch in apparent data type.
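
The underlying check is straightforward to sketch: compare the file's claimed extension against its leading "magic" bytes. The signatures below are well known, though a real engine checks many more formats:

    // Sketch of an apparent-data-type check: does the file's claimed
    // extension match its magic bytes? A real engine knows many formats.
    const signatures: Record<string, number[]> = {
      jpg: [0xff, 0xd8, 0xff],       // JPEG image
      exe: [0x4d, 0x5a],             // Windows executable ("MZ")
      zip: [0x50, 0x4b, 0x03, 0x04], // ZIP archive
    };

    function matchesExtension(fileName: string, bytes: Uint8Array): boolean {
      const ext = fileName.split(".").pop()?.toLowerCase() ?? "";
      const magic = signatures[ext];
      if (!magic) return true; // unknown extension: nothing to compare
      return magic.every((value, i) => bytes[i] === value);
    }

    // A file named "image.jpg" whose bytes start with "MZ" is really an
    // executable; the mismatch is what the gateway blocks or flags.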

Wednesday, July 30, 2008

Securing Outlook Web Access

Reverse proxy is one specialized deployment of the proxy architecture. For the typical organization, securing OWA (Outlook Web Access) is probably one of the most common concerns among IT administrators who secure their end-users' access to corporate resources.

Giving end-users access to OWA from the Internet is always a concern, as it requires opening up an internal server with valuable corporate resources to the World Wide Web. There's of course even greater concern, as OWA runs on an Exchange server on a Windows Server platform, a platform that needs to be secured before it can be offered on an Internet link.

The reverse proxy addresses this security concern neatly, as an architecture that can not only secure OWA, but also improve the OWA server's performance at the same time, using the reverse proxy's caching capabilities for static items like graphics.

In selecting a reverse proxy for securing your OWA or other internal application, look for SSL-enabled security. Not all reverse proxies support SSL, and SSL proxy capability is a requirement when talking about securing internal corporate resources. Additional benefits a proxy can offer include redirection to SSL login pages, timing out of logged-in sessions, and other security enhancements to web access.

The reverse proxy is a necessity in any corporate deployment of OWA access from the Internet, and can offer similar benefits for any other web enabled application that end-users are accessing from the Internet. Be sure to look for the right security features for your application when deciding on which reverse proxy to deploy.

Friday, July 18, 2008

HoneyGrid

For those of you who have been dealing with email problems, spam and viruses, you're probably already familiar with the term honeypot. Honeypots have been in use for some time to collect spam and virus samples on the internet. The idea of course is to get samples out in the wild as early as possible in order to create patterns to catch the spam or virus.

For web filtering and the proxy, the problem is slightly different: how do you determine that there's a malicious website, or a new website containing content you don't want on your network? The security companies have been hard at work creating new methods of getting this information as quickly as possible. Similar to honeypot technology, a "honeygrid" uses resources out on the internet to gather as many samples as quickly as possible. Larger security companies have the ability to tap their deployed network of users to help gather information about when a malicious site has been found.

As an example, Blue Coat Systems calls their "honeygrid" WebPulse. It's made up of all the deployed ProxySG systems running their web filtering software, plus all the sites that have deployed their free filtering software, K9, which according to the website currently has over 650,000 deployed copies worldwide. This force of web surfers worldwide helps Blue Coat determine when a new page has been created and, if the content is suspicious (based on real-time rating and virus scanning), gives them an opportunity to get a first look at examining the page for malicious content.

When looking at threat protection for your proxy, don't forget to ask about the latest development, honeygrids, and whether you've got a force of web surfers working for you.

Friday, July 11, 2008

Proxy Avoidance

For the typical IT administrator trying to handle end-users that are trying to get around the corporate proxy, it can be a frustrating and never-ending task. New proxy avoidance sites seem to pop up every day, so it's extremely difficult to keep a blacklist of proxy avoidance sites up to date.

This is one instance where real-time dynamic rating can help. Most IP addresses used as proxy avoidance sites host live web pages explaining how to use them for proxy avoidance.

Those web pages can be dynamically rated by proxies that have real-time rating capability. A good engine should categorize these IP addresses as proxy avoidance sites, a classification that should be blocked on the corporate proxy. And as long as you're using a transparent proxy, all HTTP traffic goes through the corporate proxy regardless of which avoidance IP addresses end-users try to use, so policy set on the proxy itself can block access to proxy avoidance sites.

For protection against proxy avoidance, do your due diligence and make sure your corporate proxy has the best protection against proxy avoidance sites and can detect new ones as they appear.

Tuesday, June 17, 2008

Object Cache and Pipelining

The proxy is the ideal place to have an object cache. This should make sense intuitively: you have multiple users accessing the internet from the same location, and many of them will go to the same web sites, so caching objects from those sites locally means more bandwidth available to all users. It also means faster access to content when requests match objects already in the cache. Objects can be anything stored on a web page: documents, images, video, or audio files.

For objects that aren't a cache hit (a first-time visit by any user to a website), pipelining can help speed up access to the page. By retrieving objects in parallel instead of serially (where you have to wait for one object to finish loading before fetching the next), you can load the contents into the cache and the destination browser much more quickly than a traditional fetch.
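
The difference is easy to sketch: issue the fetches in parallel rather than waiting on each one in turn. The URLs are placeholders:

    // Sketch: fetching page objects serially vs. pipelined in parallel.
    const objects = [
      "http://example.com/logo.png",
      "http://example.com/style.css",
      "http://example.com/script.js",
    ];

    async function fetchSerially(): Promise<void> {
      for (const url of objects) {
        await fetch(url); // each fetch waits for the previous to finish
      }
    }

    async function fetchPipelined(): Promise<void> {
      // All requests go out at once; total time approaches the slowest
      // single object instead of the sum of all of them.
      await Promise.all(objects.map((url) => fetch(url)));
    }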

Object caching does have its detractors. Depending on the implementation, object caches have been criticized for serving stale data, and in today's on-demand, 24x7 world, having up-to-the-minute information is key. That means your object cache needs algorithms that help it detect when data changes. Technologies like adaptive refresh keep track of the types of data in an object cache, determine from the data type how often that data is likely to change, and check the server for the "freshness" of the data even when there hasn't been a recent request for it.

With the right proxy there's no reason not to have an object cache and all the benefits of caching. Look for adaptive refresh and pipelining to help speed your internet access.