Welcome to the Proxy Update, your source of news and information on Proxies and their role in network security.

Friday, August 28, 2009

Focusing on applications: So what?

Network World recently discussed in their WAN Newsletter a survey indicating that the vast majority of IT organizations have taken the time and effort to identify a small set of applications that are business-critical.

Network World also commented:

We believe that an effective Network Operations Center (NOC) is critical to the success of the IT organization. We also believe, however, that the NOC needs to evolve to where its focus is a bit less on the network and a bit more on applications. With that thought in mind, the survey respondents were asked if they agreed with the statement that their organization had redesigned and/or restructured their NOC to focus on ensuring acceptable application performance. Over half of the survey respondents either agreed, or agreed strongly with that statement.


Not surprisingly, network vendors are increasingly looking holistically at applications rather than at individual network components. A quick trip to sites like Blue Coat Systems, F5, and Citrix will verify this trend toward advertising and developing messages and solutions around application delivery.

Thursday, August 27, 2009

New SQL Attack Compromises over 50,000 Sites

ScanSafe warned yesterday that over 50,000 legitimate sites have been hit by a new SQL injection attack. The attack inserts a malicious iframe on the sites. Users who visit the sites get a "drive-by download" of what ScanSafe is calling

"a potent Trojan cocktail consisting of backdoors, password stealers and a downloader."


Smaller businesses appear to be the main targets, since they don't have the dedicated security staff that larger enterprises do.

This latest attack is a good reminder of why it's important to have a proxy that blocks embedded malicious URLs, so you can safely visit sites that have been hit by attacks like these. Just make sure your proxy's URL database and malware scanning software are kept up to date.
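
To make that a bit more concrete, here is a minimal Python sketch of the kind of check a content-filtering proxy (or a standalone audit script) performs: scan an HTML page for iframes whose source points at a domain on a block list. The block-list entries and the sample page are invented for illustration; a real proxy would rely on its vendor's URL database and malware scanner rather than a hand-maintained list.

# Minimal sketch: flag injected iframes whose src points at a blocked domain.
# The block list and sample HTML below are illustrative only; a production
# proxy would consult its vendor-maintained URL database instead.
import re
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"malicious-example.cn", "evil-iframe-host.ru"}  # hypothetical entries
IFRAME_SRC = re.compile(r'<iframe[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

def find_blocked_iframes(html):
    """Return iframe URLs whose host (or parent domain) is on the block list."""
    hits = []
    for url in IFRAME_SRC.findall(html):
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS):
            hits.append(url)
    return hits

if __name__ == "__main__":
    sample = '<html><body>Welcome<iframe src="http://evil-iframe-host.ru/x.js" width="0"></iframe></body></html>'
    print(find_blocked_iframes(sample))  # ['http://evil-iframe-host.ru/x.js']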

Tuesday, August 25, 2009

Virtual Appliances Slow on the Uptake

Virtualization has been a hot topic this year, but Network World reported yesterday that few companies have begun to adopt virtual appliances in place of the physical appliances they use to run their IT infrastructure.

From Network World:

In the virtual appliance model, developers package applications with an operating system to run as either a hardware or software appliance. "The value proposition is so clear cut from vendor and customer perspectives. We build, you install, it works," says Bernard Golden, CEO of HyperStratus, an advanced IT consulting firm. "I thought virtual appliances would be a big deal, taking off really fast, but they haven't."


But even if companies do start adopting virtual appliances, Network World also has a warning for IT administrators:


As great as these plug-and-play solutions can be, Boucher says users do need to be careful about file size. "A virtual appliance package can get pretty large, so being able to manage that closely is important."

Likewise when vendors turn the physical into the virtual, Metzler warns. "You need to know how to manage and secure these, and how they're going to perform."

Wednesday, August 19, 2009

Misconfigured Proxy Causes Security Alarm

Neohapsis and TweetyCoaster both reported this week on a newly discovered security leak in Blue Coat ProxySG systems. In reality, both reports turned out to be a false alarm caused by a misconfigured proxy: a policy installed on this particular proxy allowed users to bypass authentication.

This is a good reminder that our proxies are extremely powerful devices, and that as admins we need to review the policy installed on them regularly to make sure it is still doing what we want and need it to do.
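
One small piece of that regular review can even be automated. Here's a minimal Python sketch, assuming a hypothetical proxy address and test URL, that sends an unauthenticated request through the forward proxy and checks that it still answers with a 407 (Proxy Authentication Required) rather than quietly serving the page.

# Minimal sketch: verify a forward proxy still demands authentication.
# The proxy address and test URL are placeholders; run something like this
# on a schedule as one small part of a regular policy review.
import http.client

PROXY_HOST, PROXY_PORT = "proxy.example.internal", 8080   # hypothetical proxy
TEST_HOST, TEST_PATH = "www.example.com", "/"

def proxy_requires_auth():
    conn = http.client.HTTPConnection(PROXY_HOST, PROXY_PORT, timeout=10)
    # Ask the proxy for an origin-server URL without supplying credentials.
    conn.request("GET", "http://{}{}".format(TEST_HOST, TEST_PATH))
    status = conn.getresponse().status
    conn.close()
    return status == 407  # 407 = Proxy Authentication Required

if __name__ == "__main__":
    if proxy_requires_auth():
        print("OK: proxy still challenges unauthenticated requests")
    else:
        print("WARNING: proxy served the request without authentication")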

Tuesday, August 18, 2009

The Open Group Enters Cloud Security Debate

Software as a Service (SaaS) seems to be gaining momentum and popularity over traditional methods of delivering network services and security. But there remains an inherent concern around the security of services in the cloud. Network World reported this week that the Open Group has formed a subgroup to develop standards that promote effective and secure use of cloud computing technology and services.

From the article:

There are already several other industry efforts toward addressing identifiable issues with cloud deployments and services, but this latest is trying to make a unique contribution.

While other efforts look at cloud architecture and try to address technology concerns, Open Group’s Cloud Work Group is looking first at what businesses need from cloud providers, then creating standards that can help meet those needs.
With such standards established, it will become easier for businesses to sort out cloud services by finding those that do and do not meet those standards.

For example, the group is looking at cloud security. First it sets out the business implications of security breaches – misuse of confidential data and the loss of customer goodwill.

Then it looks at how cloud services might help alleviate those pain points. Transferring responsibility of security to a provider frees up corporate resources and may result in more secure infrastructure because providers have more expertise and resources to focus on the problem than a business might.

The requirement is for providers to secure their services and guarantee performance via service level agreements.
The group will work on standards that address these requirements and others such as risk management, cost and business agility. Its goal is to write a document called a Business Scenario for Enterprise Cloud Computing that will sum up its findings and recommendations. Because of the approach The Open Group is taking with this initiative, it could produce a sharply focused set of recommendations.

Friday, August 14, 2009

Blue Coat Webpulse system protects users from Clampi

If you've been following the Clampi virus, you know it has some pretty disturbing traits. There's good news, though: a recent knowledge base posting on Blue Coat's website shows that anyone running a ProxySG with a Webpulse subscription is already blocking users from going to Clampi "call home" sites.

Another reason to keep up to date with those anti-malware subscriptions.
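
If you want to spot-check that your forward proxy really does stop a "call home" request, a quick test like the following Python sketch can help. The proxy address and the call-home URL are placeholders; real indicators should come from your vendor's threat intelligence, not from this example.

# Minimal sketch: spot-check that the forward proxy blocks a known "call home"
# hostname. The proxy address and test hostname are placeholders.
import urllib.request
import urllib.error

PROXY = "http://proxy.example.internal:8080"           # hypothetical forward proxy
CALL_HOME_URL = "http://clampi-callhome.example/"       # placeholder, not a real domain

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY})
)

try:
    resp = opener.open(CALL_HOME_URL, timeout=10)
    print("NOT blocked: proxy returned HTTP {}".format(resp.status))
except urllib.error.HTTPError as err:
    # Many proxies answer with a 403 or a vendor block page here.
    print("Blocked or denied: HTTP {}".format(err.code))
except urllib.error.URLError as err:
    print("Connection refused or filtered: {}".format(err.reason))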

Thursday, August 13, 2009

Twitter Attack Relatively Small Potatoes

A new article in Network World today claims the Twitter attacks of the past week have been relatively minor compared to the overall security attacks that occur on the Internet every day.

Network World quotes analysts at Arbor Networks, who say that Arbor's ATLAS 2.0 Internet monitoring system estimated last week that the DDoS traffic directed at Twitter was not in the multi-gigabit range that characterizes most large attacks.

From the article:

"We didn't see any evidence of a multi-gigabit surge towards twitter," he says. "Twitter has publicly said that they saw an increase in traffic but they haven't said anything about how much traffic yet."

In contrast, Labovitz notes that while Twitter was being attacked last week, an Asian ISP came under siege from a large DDoS attack that generated more than 30Gbps of DDoS traffic. According to Labovitz, such punishing attacks are commonly deployed against e-commerce sites, as well as sites that specialize in pornography and online gambling.


In case you're interested in Arbor's ATLAS system, here's some additional information:

Arbor's ATLAS Internet monitoring system is a collaborative effort that culls data from more than 100 ISPs, including British Telecom, Australian provider Netgen Networks and Indian provider Tata Communications. As part of their agreement with Arbor, all ISPs participating in the ATLAS system must share anonymous traffic data with one another on an hourly basis. Arbor recently upgraded its ATLAS system to monitor and collect real-time data for global Internet traffic, routing and application performance. Previously, the system had been used mostly to collect data on security-related traffic such as DDoS attack traffic.

Wednesday, August 12, 2009

Another Denial-of-Service Attack Takes Down Twitter

Already a victim last Thursday of a denial-of-service attack that left the site dark for more than three hours, Twitter on Tuesday morning experienced another 27 minutes of downtime that the company blamed on an attack of unknown origin.

According to Network World, Twitter also had minor outages Sunday and yesterday. Last week's attack, which was also felt by users of Facebook and LiveJournal, was attributed by a Facebook executive to efforts to silence a Georgian blogger.

These recent attacks may not impact the enterprise, or the many organizations that already block access to social networking sites, but they are a good reminder to make sure the systems in your network aren't contributing to the botnet army: that is, that they haven't been infected by drive-by malware or other viruses from websites your end-users visit. Keep that forward proxy's URL database and malware scanner up to date.
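
A simple way to start looking for botnet activity is to mine the proxy's own access logs for internal clients that keep contacting suspect domains. The Python sketch below assumes an invented, simplified log format ("client_ip host status" per line) and a placeholder suspect list; adjust the parsing to your proxy's real log format and pull indicators from a maintained threat feed.

# Minimal sketch: look for internal clients repeatedly contacting suspect
# domains in a simplified proxy access log. The log format and the
# suspect-domain list below are assumptions for illustration only.
from collections import Counter

SUSPECT_DOMAINS = {"botnet-c2.example", "clampi-callhome.example"}  # placeholders
THRESHOLD = 5  # flag clients with this many hits or more

def flag_clients(log_lines):
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        client_ip, host = parts[0], parts[1]
        if host in SUSPECT_DOMAINS:
            hits[client_ip] += 1
    return {ip: n for ip, n in hits.items() if n >= THRESHOLD}

if __name__ == "__main__":
    sample_log = [
        "10.0.0.12 botnet-c2.example 200",
        "10.0.0.12 botnet-c2.example 200",
        "10.0.0.7 www.example.com 200",
    ] * 3
    print(flag_clients(sample_log))  # {'10.0.0.12': 6}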

Tuesday, August 11, 2009

Social networking in the workplace: Here to stay

A big worry among IT admins and execs recently has been the impact of social networking on the workplace. With malware outbreaks like the Koobface virus, there's no doubt as to why. Unfortunately, the more IT admins try to prevent the use of social networking in the workplace, the more likely it is that end-users will find ways around the restrictions. Twitter, with its ties into Facebook, is a great example of how quickly new applications of this kind appear.

Network World published an article discussing why social networking in the workplace is here to stay. They also included a pointer to 12 tips for safer social networking. While these are mostly end-user tips, it's still a good idea for administrators to keep on top of what's happening as well. And if you're going to allow access to social networking, remember to keep an up-to-date URL database for malware, along with a good anti-malware scanning package, on the forward proxy in your network.

They also point out one very important tip for companies and organizations: have an Acceptable Use Policy (AUP) that explains what's allowed, what's not allowed, and what counts as crossing a boundary when using the Internet at work.

Friday, August 7, 2009

Hacker attack takes down Twitter, Facebook, LiveJournal

The big news yesterday was an annoyance to many users of social networking. Twitter, Facebook and LiveJournal were overwhelmed Thursday morning by denial-of-service attacks, disrupting access for an estimated 306 million users of the popular social networks.

It's believed thousands of infected home and workplace PCs, called bots, were instructed to flood the websites with nuisance requests, thus cutting off access to anyone else.

From USA Today's report on the outage:
Security experts can't say if the attacks were related. Twitter's 35 million users around the globe could not Tweet at all for at least three hours.
...
Access was restored in much of the U.S. by 1 p.m. Eastern time, but Twitter could not be reached via iPhone or in Eastern Europe through much of the day, says Stephan Tanase, a senior analyst at Kaspersky Lab. "This was definitely a pretty heavy attack," says Tanase.

Facebook reported degraded service for some of its 250 million users, while LiveJournal says its 21 million users were cut off for an hour.
...
Roger Thompson, a senior researcher at antivirus company AVG, says a vigilante may have been trying to "get the attention of the world on the botnet problem." Estimates vary, but some 40% of Internet-connected computers may be under the control of criminals who can easily use them for a variety of criminal pursuits.

By shutting down Twitter, the attacker may have been trying to show how powerful bot networks can be in the hands of criminals, says Thompson.

Another possible explanation: the denial of service attacks were meant to misdirect security teams. IBM recently helped a corporate client tighten security after a denial of service attack. Investigators learned that as the company scrambled to block the bot network bombarding it with nuisance requests, the attackers used a different botnet to steal data.

"It's like jingling your keys at a baby so they don't pay attention to what you're really doing," says Dan Holden, a manager at IBM's X-Force Research Lab.


Seeing how strong this botnet army has gotten is a good reminder to make sure none of the systems under your jurisdiction are part of it. Part of that protection, of course, is a good proxy architecture that protects your users from getting infected in the first place.

Thursday, August 6, 2009

Web Surfers Forced to Choose Security or Anonymity

I found an interesting news piece on a Google service that helps protect Internet surfers from malicious sites. Google Safe Browsing is supposed to protect you from malware by letting you know when you're about to enter a dangerous site. But at the same time it records your IP address and leaves a cookie behind.

Because of that cookie, Google can tell when you're using an anonymizing proxy and when you're not, since it sees the same cookie arriving from different IP addresses.

In essence it's gathering data about browsing activities that users are trying to keep secret, a researcher told attendees at the Black Hat security conference last week in Las Vegas.

PC World explains it this way:

Google Safe Browsing, a database service that warns Internet users when they are about to enter infected pages, marks browsers so that users can be identified even if they proxy all their traffic through another IP address, says Robert Hansen, CEO of Internet security firm SecTheory. "It's a privacy-security tradeoff," Hansen says.

Firefox and Chrome browsers are both susceptible to the problem, he says. Others may be as well, but Hansen hasn't tested them.

Browsers routinely connect to Google Safe Browsing as often as 30 times per hour to download updated lists of sites Google has found to be dangerous. When users attempt to connect to these sites, the browsers display a warning that they are potentially unsafe so users can avoid them.

These same users might also want to mask their Internet activity by directing their traffic through proxy sites, but Google gathers data that reveals the actual machine, Hansen says.

When browsers connect to Google Safe Browsing, the service leaves a cookie in the browser. If a user subsequently turns on an anonymizing proxy, Google will have a record of that cookie resolving to two different IP addresses – its actual address and the proxy address, Hansen says.

So the user will expect to thwart anyone trying to find out where their traffic comes from, but Google's logs would associate the proxy address with the user, he says. "Google knows you have two IP addresses associated with that cookie," he says. "They can correlate it, but the question is, are they doing it?"

To remain anonymous, users can turn off the auto-update feature in their browser that gathers fresh unsafe URLs from Google Safe Browsing, but that is a bad idea, too. "It protects you from malware and phishing sites. It's really important to the public. That's why it exists in the first place," Hansen says.

The Chrome browser gathers more identifying information – a hash of the machine ID and of the user ID, he says. That means proxied traffic can be traced to not only a particular IP address but also an individual machine at that address. Investigators would have to enter the machine ID and user ID into the browser, have the browser hash it, and match the results with the hashes logged with Google Safe Browsing to identify a suspect machine, he says.

How far back an individual's Internet activity could be tracked depends on how long Google Safe Browsing maintains its logs, he says.
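
To illustrate the correlation Hansen is describing, here is a small Python sketch that groups hypothetical request-log entries by cookie value and flags any cookie seen from more than one source address. The log entries are invented; the point is only how trivially one cookie ties a real address and a proxy address together.

# Minimal sketch of the cookie/IP correlation described above: any cookie
# observed from more than one source address links those addresses together.
# The "log" entries are invented for illustration.
from collections import defaultdict

requests = [
    {"cookie": "abc123", "ip": "203.0.113.10"},   # user's real address
    {"cookie": "abc123", "ip": "198.51.100.77"},  # same user via an anonymizing proxy
    {"cookie": "zzz999", "ip": "203.0.113.55"},
]

ips_by_cookie = defaultdict(set)
for r in requests:
    ips_by_cookie[r["cookie"]].add(r["ip"])

for cookie, ips in ips_by_cookie.items():
    if len(ips) > 1:
        print("cookie {} links addresses: {}".format(cookie, sorted(ips)))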


Just another reminder that there's not much you can do on the Internet today that's truly private, even if you decide to use an anonymous proxy.

Wednesday, August 5, 2009

Marines ban Facebook, MySpace amid Pentagon study

Social networking is one of those gray areas where IT admins, CIOs and CTOs still haven't decided whether it's good or bad for business. There are some positive elements (good marketing campaigns can take advantage of these sites), but there are also negatives.

The Marine Corps decided this week that the bad outweighs the good, and is banning its troops from going onto Facebook, MySpace and other social networking sites, citing a possible security risk.

From the news report:

The corps issued an order Monday saying the ban is effective immediately on the Marine Corps computer network. It said the Internet has been used as a haven for malicious behavior, and said that using social sites can expose information to adversaries.

The Associated Press has learned that the move is part of a larger Pentagon review. Deputy Defense Secretary William Lynn last week ordered a review of both the threats and benefits of using social networking — and asked it be done by the end of the month.

The order doesn't affect Marines' private use of such networks on personal computers outside of their jobs.


This recent move should remind IT admins to look at their policy around social networking and make a decision one way or the other.

Tuesday, August 4, 2009

Novell aims to tighten cloud security

We've talked in the past about how some security features in proxies may be moving to the cloud (and about how some are already there). Cloud computing has also brought about security concerns, and rightly so. Last week ZDNet reported on a new offering from Novell that addresses some of these concerns:

Novell has unveiled a cloud-computing identity and access management service, designed to extend corporate security policies to hosted facilities.

On Wednesday, Novell demonstrated an advanced prototype of its Cloud Security Service, which is due for release to enterprise organisations as a product early in 2010.

Novell said the new service, which has been in a private joint-development phase with hosting partners, is based on existing components used in its Access Manager, Sentinel and Identity Manager products.

"We have a prototype, but we haven't had to build a brand-new product. It's been more of a repackaging and adding of features and functions to existing technologies, and then building those out as a cloud-computing service," Markus Krauss, vice president of identity and access management, EMEA, told ZDNet UK.

"Most of the connectivity is already there in our standard products, but now we combine them differently and enhance their functionality to be more cloud-specific," he added.

Based on more than 60 cloud patents and patent applications, the service uses proxy technology to avoid exposing critical information, according to Novell. It also supports a number of industry standards used in public and private clouds.

Krauss said the Cloud Security Service product comprises enterprise connectors to annex part of the cloud under existing security controls, a broker to provide a secure bridge, identity connectors to control user access and roles, and event-tracking connectors to report on what is happening in the cloud.

"If you have governance, risk-management and compliance activity in your organisation, the cloud becomes absolutely seamless for you from a policy point of view — because, through the connectors, we integrate the cloud as part of your standard infrastructure. It becomes fully transparent," he said.


It's interesting that Novell uses proxy technology to accomplish its security in the cloud, which shows more than ever that proxies are an important component of the network infrastructure.
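
As a very rough illustration of what "using proxy technology" for cloud access control can mean in principle, here is a toy Python sketch of the kind of identity-aware check a broker or proxy might apply before forwarding a request to a hosted service. The users, roles and service names are invented, and Novell's actual product (federation, provisioning, event tracking) is of course far more involved than this little gatekeeper.

# Toy sketch of an identity-aware gatekeeper in front of a cloud service:
# map a user to roles, check the roles against the requested service, and
# only then forward the request. All names here are invented for illustration.
USER_ROLES = {"alice": {"finance"}, "bob": {"engineering"}}                   # hypothetical directory
SERVICE_POLICY = {"cloud-crm": {"finance"}, "cloud-build": {"engineering"}}   # hypothetical policy

def authorize(user, service):
    """Allow the request only if the user holds a role the service permits."""
    return bool(USER_ROLES.get(user, set()) & SERVICE_POLICY.get(service, set()))

def forward(user, service, request):
    if not authorize(user, service):
        return "403 Forbidden: {} may not reach {}".format(user, service)
    # A real broker would proxy the request to the provider and log the event here.
    return "forwarded '{}' for {} to {}".format(request, user, service)

if __name__ == "__main__":
    print(forward("alice", "cloud-crm", "GET /accounts"))
    print(forward("bob", "cloud-crm", "GET /accounts"))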