Welcome to the Proxy Update, your source of news and information on Proxies and their role in network security.

Friday, October 30, 2009

The Curse of Cloud Security

Lately, there's been a lot of talk around the new buzzword "cloud computing". We've even discussed the issue here on this blog in past articles. The immediate benefits of cloud computing are obvious: it lets you simplify your physical IT infrastructure and cut overhead costs. But the problem that keeps haunting us is that we've only started to see all of the security risks involved.

Network World tackles this topic this week and says:

Putting more of your infrastructure in the cloud has left you vulnerable to hackers who have redoubled efforts to launch denial-of-service attacks against the likes of Google, Yahoo and other Internet-based service providers. A massive Google outage earlier this year illustrates the kind of disruptions cloud-dependent businesses can suffer.

That's one of the big takeaways from the seventh-annual Global Information Security survey, which CSO and CIO magazines conducted with PricewaterhouseCoopers earlier this year. Some 7,200 business and technology executives worldwide responded from a variety of industries, including government, health care, financial services and retail.

Given the expense to maintain a physical IT infrastructure, the thought of replacing server rooms and haphazardly configured appliances with cloud services is simply too hard for many companies to resist. But rushing into the cloud without a security strategy is a recipe for risk. According to the survey, 43 percent of respondents are using cloud services such as software as a service or infrastructure as a service. Even more are investing in the virtualization technology that helps to enable cloud computing. Sixty-seven percent of respondents say they now use server, storage and other forms of IT asset virtualization. Among them, 48 percent actually believe their information security has improved, while 42 percent say their security is at about the same level. Only 10 percent say virtualization has created more security holes.

Security may well have improved for some, but experts like Chris Hoff, director of cloud and virtualization solutions at Cisco Systems, believe that both consumers and providers need to ensure they understand the risks associated with the technical, operational and organizational changes these technologies bring to bear.


The article is a good reminder to make sure you have all of your ducks in a row if you're going to consider cloud computing. Network World asks the difficult question:

When it went down, many companies that have come to rely on its cloud-based business applications (such as e-mail) were dead in the water. ...
"What if you have a breach and you need to leave the cloud? Can you get out if you have to?"

Thursday, October 29, 2009

Cyber Security Awareness

The Open Systems Journal web site is highlighting Cyber Security Awareness this month, and Day 25 of their efforts focused on security on ports 80 and 443, the ones used for web traffic.

As they remind us:

Ports 80 and 443 are ports generally associated with the Internet. Port 443/HTTPS is the HTTP protocol over TLS/SSL. Port 80/HTTP is the World Wide Web. Let’s face it, ports 80/443 are generally a given for being open on any type of filtering device allowing traffic outbound on your network. If web servers are being hosted, connections will be allowed inbound to those web servers. They are also two ports that pose a significant threat to your network.

One reason for such a threat is the very fact that we just mentioned: everyone generally associates these ports with the Internet and web traffic, and they’re usually open. Sadly, they don’t get watched that closely. I have heard the statement many times: “it’s just people surfing the web, and we ignore it because there is too much traffic.” The sad reality is that more often than not, the threat will come from people on your network surfing the web. The rise in browser-based attacks is staggering to say the least.

For those that do want to watch it closely, that poses a challenge as well. How do you filter? What do you filter? How do you do analysis on the traffic? Let me pose an example to you. I looked at a piece of malware about three years ago that used base64-encoded HTML comments, on a very benign web page, to pass commands. How do you detect that? Some software also automatically falls back to port 80 if its primary port is unavailable.

The above two threats apply to both port 80 and port 443 traffic. Now, let’s just focus on 443 for a minute. It’s encrypted traffic, which means you can’t read it. So what do you do? Unless you have a proxy on your network where you can inspect the traffic at that point, or run a host-based IDS, your other network tools are blind to what is there.


Any serious IT admin concerned about security should already have the aforementioned proxy in their network. It provides a way to make sure only true HTTP traffic passes through these ports (you can choose to block anything that isn't actually HTTP). And even if you don't do this (you'd be surprised how many applications break if you do), you'll at least have a record of all the traffic going through ports 80 and 443 for later audits. But seriously consider controlling more of the traffic on ports 80 and 443; the proxy will definitely help you do this.
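As a rough sketch of what "only true HTTP" enforcement means, a proxy can validate the request line before forwarding anything. This is a toy illustration, not a production filter, and the allowed-method list is an assumption:

```python
# Sketch: decide whether the first line a client sends looks like a real
# HTTP request. Tunneled protocols (SSH, TLS, IM clients hopping to port
# 80) fail the parse and can be blocked or logged.
ALLOWED_METHODS = {"GET", "POST", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def looks_like_http(first_line: bytes) -> bool:
    """Return True if the line parses as 'METHOD target HTTP/x.y'."""
    try:
        method, target, version = first_line.decode("ascii").split(" ")
    except (UnicodeDecodeError, ValueError):
        return False
    return method in ALLOWED_METHODS and version.startswith("HTTP/")
```

A real proxy does far more (header parsing, response validation), but even this level of protocol checking stops most non-HTTP traffic from sneaking out over port 80.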

Wednesday, October 28, 2009

Cookies sound sweet, but they can be risky

USA TODAY ran a story this week with the above title. It's catchy for the typical reader, but it has much more meaning when you're an IT manager. For the uninitiated: everywhere you go on the Internet, you leave behind small footprints called cookies.

From the USA Today article:

Cookies track where you have gone online and are stored on your hard drive. The websites you visit tap into those cookies so they can tailor promotions to you or retrieve data such as your credit card information. Every site you visit also registers your numerical IP (Internet protocol) address and can track information associated with it. Your IP address contains information like your hometown, but not your name.

Cookies come in two types: first- and third-party. First-party cookies are kept only by the site you visit and any affiliated properties, such as the company's Facebook fan page. This information is not shared with other websites and is generally not considered worrisome. Third-party cookies are those shared across various websites; for example, if you click on certain ads or search for a car on sites that share such cookies, your information goes to a far larger audience.
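The first-party/third-party distinction above boils down to comparing the cookie's domain with the site you're visiting. Here's a simplified sketch; real browsers use the Public Suffix List rather than the naive "last two labels" rule assumed here:

```python
# Sketch: classify a cookie as first- or third-party by comparing the
# registrable domain of the page with that of the cookie. The "last two
# labels" shortcut is an assumption and breaks on domains like .co.uk.
def registrable(host: str) -> str:
    """Naive registrable domain: the last two DNS labels."""
    return ".".join(host.lower().rstrip(".").split(".")[-2:])

def is_third_party(page_host: str, cookie_domain: str) -> bool:
    """True when the cookie belongs to a different site than the page."""
    return registrable(page_host) != registrable(cookie_domain)
```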


USA Today does offer some advice to protect yourself when browsing the web:

•Check website privacy policies. Most sites state what information is gathered and how it is used. Some will let you opt in or opt out of the collection process. Check the policy especially if you plan to register on a site.

•Disable cookies. On your Web browser, you likely have an option to disable all cookies or those that apply to third-party uses. Disabling first-party cookies means websites won't likely have your credit card or password information stored anymore. Greve has disabled third-party cookies on her computer and "sleeps better at night" because of it, she says.

•Remove cookies regularly. You can set your browser to automatically clear your entire browsing history and cookies, or do it manually. But Greve says even though cookies are removed from the computer, "Once you put your information out, it's out there, and it's going to get to stores in one way, shape or form."

•Consider installing an "anonymizer." These services hide your IP address wherever you go, but Greve warns there have been "phishing" attacks — e-mails that try to get personal information — through some of these.

•Use a proxy server. These devices, which are intermediaries between networks, allow you to browse in private.


Of course that last recommendation is one I heartily endorse. Anyone managing a network should consider putting in a proxy server to help protect end-users browsing the web. In addition, make sure that proxy server is up to date on its URL database, real-time categorization, and malware scanning software.
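For the curious, pointing a client at a proxy can be as simple as this Python standard-library sketch; proxy.example.com:3128 is a placeholder address, not a real server:

```python
# Sketch: route web requests through an explicit proxy using the Python
# standard library. The proxy host/port here are placeholders.
import urllib.request

proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.com:3128",
    "https": "http://proxy.example.com:3128",
})
opener = urllib.request.build_opener(proxy)
# opener.open("http://example.com/") would now go through the proxy,
# where URL filtering and malware scanning can be applied centrally.
```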

Tuesday, October 27, 2009

Cisco to Acquire SaaS Web Security Leader ScanSafe

This morning, Cisco announced its intention to purchase ScanSafe, a provider of SaaS Web security. It's another announcement in a string of acquisitions in the web security space, the most recent being Barracuda's announced intention to purchase PureWire, another SaaS Web security provider.

ScanSafe is based in London and San Francisco, and its Web security solutions are targeted at organizations ranging from global enterprises to small businesses.

From the announcement:
"With the acquisition of ScanSafe, Cisco is executing on our vision to build a borderless network security architecture that combines network and cloud-based services for advanced security enforcement," said Tom Gillis, vice president and general manager of Cisco's Security Technology Business Unit (STBU). "Cisco will provide customers the flexibility to choose the deployment model that best suits their organization and deliver anytime, anywhere protection against Web-based threats."

Web security is a large and expanding market expected to grow to $2.3 billion by 2012. By acquiring ScanSafe, Cisco is building on its successful acquisition of leading on-premise content security provider IronPort. The acquisition brings together the Cisco IronPort(TM) high-performance Web security appliance and ScanSafe's leading SaaS Web security service. This combination will expand Cisco's security portfolio to offer superior on-premise, hosted, and hybrid-hosted Web security solutions.

"ScanSafe pioneered the market for SaaS Web security and continues as a leader in this rapidly growing market," said ScanSafe CEO Eldar Tuvey. "At a time when enterprises are increasingly focused on a flexible and mobile workplace, the need for hybrid-hosted Web security solutions is greater than ever. By joining the Cisco team we will be able to offer even better and more flexible protection to our customers."

ScanSafe's service will be integrated with Cisco® AnyConnect VPN Client, the newest virtual private network (VPN) product from Cisco, to provide the industry's leading secure mobility solution. In addition, ScanSafe's global network of carrier-grade data centers and multi-tenant architecture will further enhance Cisco's ability to provide new cloud-security services for customers anywhere in the world.

Upon the close of the acquisition, the ScanSafe team will become part of Cisco's STBU, reporting to Gillis.

The ScanSafe acquisition demonstrates Cisco's commitment to security and its ability to use its financial strength to quickly capture key market transitions through its build, buy, and partner strategy. Under the terms of the agreement, Cisco will pay approximately $183 million in cash and retention-based incentives. The acquisition is subject to various standard closing conditions and is expected to close in the second quarter of Cisco's fiscal year 2010.


There's definitely more interest lately in Web Security, and I think you'll only see more in the acquisition arena, in addition to new offerings from various vendors. With malware being as prevalent in web pages as in email, this trend can only continue.

One Phishing Gang Dominates Attacks

Both PC World and Network World reported this week on a report released by the Anti-Phishing Working Group (APWG). According to the APWG, a single group of attackers accounted for a quarter of all phishing in the first half of this year.

The group goes by the name Avalanche; it started work late last year and has been increasing its activity since. "This criminal operation is one of the most sophisticated and damaging on the Internet and targets vulnerable or non-responsive registrars and registries," the APWG report says.

From the PC World article:

The group attacks financial institutions, online services and job-search providers using fast-flux techniques that hide its actual attack sites behind an ever-changing group of proxy machines, mainly hacked consumer computers, according to APWG's latest Global Phishing Survey.

Rather than dying out after efforts to take down its operations, the gang seems to be increasing its efforts. "Avalanche attacks increased significantly in the third quarter of the year, and preliminary numbers indicate a possible doubling of attacks in the summer of 2009," the report says. The report period ends July 1, so the next report for the second half of this year will examine the apparent surge in detail.

Because the IP addresses that the attacks seem to be coming from are constantly shifting, notifying ISPs of the problem doesn't work. By the time the ISPs shut down the IP addresses the attack proxies have moved somewhere else, the report says.
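To make the fast-flux idea above concrete, here's a minimal heuristic sketch: a domain whose repeated lookups return many distinct IPs, all with very low TTLs, is a candidate. The thresholds are illustrative assumptions, not figures from the APWG report:

```python
# Sketch: a crude fast-flux detector. Feed it (ip, ttl) pairs gathered
# from repeated DNS queries against one domain. The 10-IP and 300-second
# thresholds are assumptions for illustration only.
def looks_fast_flux(lookups, ip_threshold=10, ttl_threshold=300):
    """lookups: list of (ip_address, ttl_seconds) tuples."""
    distinct_ips = {ip for ip, _ in lookups}
    all_low_ttl = all(ttl < ttl_threshold for _, ttl in lookups)
    return len(distinct_ips) >= ip_threshold and all_low_ttl
```

A normal site resolves to a small, stable set of addresses with long TTLs; a fast-flux network burns through hacked consumer machines and keeps TTLs short so the mapping can change constantly.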

The Avalanche gang registers domains at one to three registries or resellers and tests whether the registrars notice that they are registering domain names that are nearly identical. If not, they launch attacks from these domains, and if the registrar takes action against them, they just abandon the domains and move on.

An example of these similar domains is given in the report: 11fjfhi.com, 11fjhj.com, 11fjfh1.com, 11fjfhl.com. Each domain is used to launch up to 30 attacks, APWG says.
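Registrars (or defenders) could flag batches like these automatically. This sketch uses plain Levenshtein edit distance; the two-edit threshold is an assumption for illustration:

```python
# Sketch: flag pairs of registered domains that are nearly identical,
# the pattern the Avalanche gang exploits. Standard Levenshtein DP.
def edit_distance(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # delete from a
                           cur[j - 1] + 1,       # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

def near_duplicates(domains, max_dist=2):
    """Pairs of names within max_dist edits of each other (assumed cutoff)."""
    return [(a, b) for i, a in enumerate(domains) for b in domains[i + 1:]
            if edit_distance(a, b) <= max_dist]
```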

Avalanche attacks just one or two businesses at a time and frequently cycles back to re-attack older targets, the report says.

Because mitigation efforts by ISPs and others focused on Avalanche, the average lifetime of each Avalanche attack was significantly lower than the average for all attacks, the report says. The average uptime for all attacks was 39 hours, 11 minutes; for Avalanche attacks, it was 18 hours, 45 minutes, the study says.

APWG researchers consider an attack dead if it stays inactive for an hour. These attacks could be started up again after an hour, which would extend their longevity but would not be measured by the report, the researchers say. So the lifespan of Avalanche attacks may be longer than the report results indicate.

Malicious Domains Increase

In other study results, it appears that using hacked domains as launch pads for attacks is increasing. Some 14.5% of phishing attacks came from what APWG called malicious domains registered by phishers themselves. That is down from 18.5% in the second half of last year, the period for the group's previous Global Phishing Survey. "Virtually all the rest were hacked or "compromised" domains belonging to innocent site owners," the study says.

Of the malicious domains, 43% were launchpads for the Avalanche attack.

Two top-level domains, .pe (Peru) and .th (Thailand), score highest in a measure of how many second- and third-level domains within them are used to launch phishing attacks. The average score across all top-level domains was 6.9, while .pe scored 20 and .th scored 16.

Overall, attacks came from 30,131 domains distributed among 171 top-level domains. Half (50.3%) of these domains fell within the .com top-level domain, 8.5% within .net, and 5.6% within .org. The next three most often used top-level domains were .eu, .ru and .de, each with less than 3%.


This new report from the APWG reminds us to make sure we've got some sort of protection against phishing sites in our proxy deployment. The short-lived nature of these domains also makes it important not to rely solely on URL databases, but to add some type of real-time categorization as well.

Monday, October 26, 2009

Geocities set to close today

It's a sad day for many of us who've been around on the Internet for a long time. Geocities is set to close today. Started in 1994, it was the place for the masses to have their own website. I remember getting my own page and email address at geocities in 1995. I was lucky and early enough to get a short email address with just my name at the time.

Somehow, Yahoo never figured out how to make any money from Geocities, even though the network is still among the top 200 most-trafficked sites on the Internet, according to metrics tracker Alexa. That alone should have given Yahoo some reason to keep the site and try to revive its usage, at least for ad revenue.

But alas, that's not the case. For those of you still looking for Geocities content, you may be relieved to know some of it will be saved by the Archive Team, which has been busily rushing to save pages before today's deadline.

Friday, October 23, 2009

Schwarzenegger denies consumers knowledge of their own stolen data

I might have missed this bit of news if it hadn't been for a blog post over at Sophos, the anti-virus provider. Apparently last week, California Governor Arnold Schwarzenegger vetoed Senate Bill SB-20. The bill would have required businesses to inform consumers of what data about them was lost during a breach, to inform the California Attorney General if more than 500 records were lost, and to provide advice to consumers on how to protect themselves from having their data exploited. It was passed by both houses of the California Legislature without opposition.

More from Chester Wiesniewski's blog:

The authors of the bill had worked closely with the insurance industry and other related parties to strike the right balance between protecting consumers and not placing an undue burden on businesses. Arnold disagrees, and claims to be looking out for businesses, yet those businesses had already dropped opposition to the legislation.

The Governator and I clearly don't see eye to eye on this one. I had my debit card "skimmed" a year ago from a local Automatic Teller Machine (ATM) in Vancouver. My bank dutifully notified me and asked me to come in for a replacement card. While speaking with the clerk at my local branch to retrieve my new card, I asked "Which ATM was it where my card was compromised, or was it a shop?" The response was "We don't disclose those details to customers."

Why not? I certainly do not want to make the mistake of returning to a merchant who may have been in on the scam. Consumers who are made aware of data loss have a right to know what personal information may have been obtained about them so they can protect themselves in the future.


I agree with Mr. Wiesniewski: this is something the Governor should have signed, and his veto is surprising. With identity theft as widespread as it is today, you'd think this one would have been a no-brainer.

Thursday, October 22, 2009

Using Reverse Proxies for Front Ending Exchange

The Microsoft Exchange Team Blog wrote this week on the topic of placing Exchange 2010 (and 2007) Client Access Servers in the perimeter network, similar to the way "FE" (front end) servers are placed for Exchange 2000/2003. Their recommendation? Don't do it.

Instead the recommendation is to use reverse proxies. Their explanation:
Reverse Proxies are built to be put in the perimeter network or at the edge of the network. They include many security features and flexibility for customers to determine the level of defense-in-depth which is right in any particular environment.


If Microsoft recommended placing FE servers in the perimeter network for 2000/2003, why have they changed their stance for Exchange 2007 and 2010? Here's some of their more detailed rationale:

The E2000/E2003 FE servers were there to authenticate users and proxy traffic to the BE server where the traffic was actually interpreted and responded to. For example, the FE servers in E2000/E2003 don't do any Outlook Web Access (OWA) rendering. That all takes place on the BE servers.

The E2007/E2010 CAS role on the other hand contains all middle-tier logic and rendering code for processes like OWA, Exchange ActiveSync (EAS), Exchange Web Services (EWS), and more.


It looks like Microsoft is coming around to what we've known here all along: the proxy is still the best solution for securing web traffic coming into and out of the organization.

(Side note: I love the title of their blog "You had me at EHLO" - as a former postmaster, I can really appreciate it.)

Wednesday, October 21, 2009

Keeping an Eye on Multimedia Application Use

Network World reported this week on Blue Coat's new offerings for monitoring multimedia application use in the workplace. IT admins have routinely blocked or allowed video traffic based on corporate requirements and policies. There was a time you could easily claim video traffic from sites like YouTube was recreational in nature and block it. Lately, though, there’s more and more legitimate business content, such as training videos, on YouTube, which makes it harder for IT admins and HR groups to make a blanket policy decision on what's allowed in the workplace.

Blue Coat Systems, in its latest update, is trying to give its customers greater flexibility when it comes to application and bandwidth policies. By upgrading to the newest version of Blue Coat’s URL filtering software, called WebFilter, IT managers get more granular control over Web-based multimedia applications and greater protection against Web-based threats. Ten new categories are now available in WebFilter: six related to network usage and four related to security.

From Network World:

The goal is to give enterprises the tools to see how employees are really using Web-based applications and content, and then apply policies that don’t get in the way of important business activities, says Steve House, Director of Product Marketing at Blue Coat. “We’re getting more granular in our ability to understand Web traffic. More and more traffic is moving to the Web, including business traffic and recreational traffic,” House says. “We have really invested in understanding that and are using that information to make better decisions around how the network is utilized and how secure the environment might really be.”

On the multimedia front, Blue Coat added six new categories to WebFilter: media sharing, art/culture, Internet telephony, network errors, TV/video streams and radio/audio streams. With these new categories, companies can distinguish between the different types of multimedia applications so they don’t adopt inflexible traffic policies that limit productivity.

Another new feature is the ability to differentiate long radio and video streams from short streams that are less than 15 minutes. This lets a business allow shorter audio/video clips during normal business hours but only allow bandwidth consuming TV/video streams after business hours, for instance.
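In policy terms, that kind of rule might look something like this sketch; the 15-minute threshold comes from the article, while the business-hours window is an assumed example:

```python
# Sketch: allow short clips any time, but push long (bandwidth-heavy)
# streams outside business hours. The 9am-5pm window is an assumption.
def allow_stream(duration_min: float, hour: int) -> bool:
    """Return True if a stream of this length may play at this hour."""
    if duration_min < 15:       # short clip: always fine
        return True
    return hour < 9 or hour >= 17  # long stream: off-hours only
```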

“There’s a very big difference between someone watching a two-minute training video versus someone going to Hulu and watching a TV show or two-hour movie,” House says. “Those things can definitely consume massive amounts of resources and are much more of a productivity drain.”

In addition, Blue Coat WebFilter can now assign URLs to up to four categories. For example, an online news publication could be classified in the news category, while the sports section of that publication could be classified under both the news and sports categories so a company could decide to restrict access to the sports section without blocking access to the entire news site.
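The multi-category idea can be sketched as follows. The category names, URL prefixes, and longest-prefix matching rule here are illustrative assumptions, not Blue Coat's actual policy syntax:

```python
# Sketch: a URL can carry several categories, and policy blocks if any
# of them is on the deny list. Ratings and rules below are hypothetical.
RATINGS = {  # URL prefix -> set of categories (assumed categorizer output)
    "news.example.com/": {"news"},
    "news.example.com/sports/": {"news", "sports"},
}

BLOCKED = {"sports"}

def decide(url: str) -> str:
    # Longest matching prefix wins, so a site section can carry
    # extra categories beyond the site-wide rating.
    match = max((p for p in RATINGS if url.startswith(p)),
                key=len, default=None)
    categories = RATINGS.get(match, set())
    return "block" if categories & BLOCKED else "allow"
```

With rules like this, the sports section is blocked while the rest of the news site stays reachable.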

On the security front, Blue Coat added four new categories designed to better filter Web-based threats associated with unwanted software, online meetings, translation sites and greeting cards.

“For a long time we’ve had the ability to block malware, but now we separately categorize the sites that are trying to instruct botnet-controlled computers, ones that have been infected,” House says. “If you can look and see [which computers] they’re trying to talk to, you can not only block it but also run a report to see who has been infected and turn that over to the IT group.”


Just a good reminder to keep our proxy software up to date so we can take advantage of new features that make developing enterprise policy on the proxy easier.

Tuesday, October 20, 2009

Who uses the net and when?

Network World recently published an article on web usage patterns in the U.S. and in Europe. Not surprisingly, the peak traffic on the Internet in each of these areas is not during working hours. What was surprising is that traffic peaks around 11 PM in the U.S. but much earlier, at 7 PM, in Europe.

The two driving factors in all this Internet usage? Games and video. For games, it's specifically World of Warcraft and Steam; video traffic is primarily from YouTube and adult sites. As Network World says, "So, in a sense you could say that what’s keeping Internet users up at night is sex and violence."

Monday, October 19, 2009

Kaspersky CEO Calls for End to Internet Anonymity

In an interview with ZDNet this week, Eugene Kaspersky, CEO of Kaspersky Lab, maker of Russia's No. 1 anti-virus package, said that the internet's biggest security vulnerability is anonymity, calling for mandatory internet passports that would work much like driver's licenses do in the offline world. Kaspersky also proposed the formation of an internet police body that would require users everywhere to be uniquely identified.

From an article in The Register on Kaspersky's controversial comments:

"Everyone should and must have an identification, or internet passport," he was quoted as saying. "The internet was designed not for public use, but for American scientists and the US military. Then it was introduced to the public and it was wrong...to introduce it in the same way."

Kaspersky, whose comments are raising the eyebrows of some civil liberties advocates, went on to say such a system shouldn't be voluntary.

"I'd like to change the design of the internet by introducing regulation - internet passports, internet police and international agreement - about following internet standards," he continued. "And if some countries don't agree with or don't pay attention to the agreement, just cut them off."

He rejected the notion that internet protocol numbers were sufficient for tracking a user, arguing they are too easy to come by.

"You're not sure who exactly has the connection," he explained. "Even if the IP address is traced to an internet cafe, they will not know who the customer or person is behind the attacks. Think about cars - you have plates on cars, but you also have driver licenses."

Kaspersky admitted such a system would be hard to put in place because of the cost and difficulty of reaching international agreements. But remarkably, his interview transcript spends no time contemplating the inevitable downsides that would come in a world where internet anonymity is a thing of the past.

"You could make the same argument about the offline world," said Matt Zimmerman, a senior staff attorney at the Electronic Frontier Foundation. "You know, every purchase you make should be tracked, we should ban the use of cash, we should put cameras up everywhere because in that massive data collection something might be collected to help someone. But we think privacy is an important enough countervailing value that we should prevent that."

In Kaspersky's world, services such as Psiphon and The Onion Router (Tor) - which are legitimately used by Chinese dissidents and Google users alike to shield personally identifiable information - would no longer be legal. Or at least they'd have to be redesigned from the ground up to give police the ability to surveil them. That's not the kind of world many law-abiding citizens would feel comfortable inhabiting.

And aside from the disturbing big-brother scenario, there are the problematic logistics of requiring every internet user anywhere in the world to connect using an internationally approved device that authenticates his unique identity. There's no telling how many innovations might be squashed under a system like that.

No doubt, the cybercriminals that Kaspersky has valiantly fought for more than a decade are only getting better at finding ways to exploit weaknesses in internet technologies increasingly at the heart of the way we shop, socialize and work. But to paraphrase Benjamin Franklin, those who sacrifice net liberty for incremental increases in security no doubt will get neither.


This of course leaves the question: how much control are we willing to live with in order to stop cyber crime? It's not an easy answer, and the solution probably lies somewhere in the middle of the spectrum.

Thursday, October 15, 2009

Researchers advise cyber self defense in the cloud

Network World reported this week that security researchers are warning that Web-based applications are increasing the risk of identity theft or loss of personal data more than ever before. The best defense against data theft, malware and viruses in the cloud is self defense, researchers at the Hack In The Box (HITB) security conference said. The difficulty, of course, is in getting people to change how they use the Internet, such as what personal data they make public.

From Network World:

People put a lot of personal information on the Web, and that can be used for an attacker's financial gain. From social-networking sites such as MySpace and Facebook to the mini-blogging service Twitter and other blog sites like Wordpress, people are putting photos, resumes, personal diaries and other information in the cloud. Some people don't even bother to read the fine print in agreements that allow them onto a site, even though some agreements clearly state that anything posted becomes the property of the site itself.

The loss of personal data by Sidekick smartphone users over the weekend, including contacts, calendar entries, photographs and other personal information, serves as another example of the potential pitfalls of trusting the Cloud. Danger, the Microsoft subsidiary that stores Sidekick data, said a service disruption almost certainly means user data has been lost for good.

Access to personal data on the cloud from just about anywhere on a variety of devices, from smartphones and laptops to home PCs, shows another major vulnerability because other people may be able to find that data, too.

"As an attacker, you should be licking your lips," said Haroon Meer, a researcher at Sensepost, a South African security company that has focused on Web applications for the past six years. "If all data is accessible from anywhere, then the perimeter disappears. It makes hacking like hacking in the movies."

A person who wants to steal personal information is usually looking for financial gain, Meer said, and every bit of data they can find leads them one step closer to your online bank, credit card or brokerage accounts.

First, they might find your name. Next, they discover your job and a small profile of you online that offers further background information such as what school you graduated from and where you were born. They keep digging until they have a detailed account of you, complete with your date of birth and mother's maiden name for those pesky security questions, and perhaps some family photos for good measure. With enough data they could make false identification cards and take out loans under your name.

Identity theft could also be an inside job. Employees at big companies that host e-mail services have physical access to e-mail accounts. "How do you know nobody's reading it? Do you keep confirmation e-mails and passwords there? You shouldn't," said Meer. "In the cloud, people are trusting their information to systems they have no control over."

Browser makers can play a role in making the cloud safer for people, but their effectiveness is limited by user habits. A browser, for example, may scan a download for viruses, but it still gives the user the choice of whether or not to download. Most security functions on a browser are a choice.

Lucas Adamski, security underlord (that's really what his business card says) at Mozilla, maker of the popular Firefox browser, offered several bits of cyber self defense advice for users, starting with the admonition that people rely on firewalls and anti-virus programs too much.

"You can't buy security in a box," he said. "The way to be as secure as possible is about user behavior."

There is a lot of good built-in security already installed in browsers, he said. If you get a warning not to go to a site, don't go to it. When you do visit a site, make sure it's the right one. Are the images and logos right? Is the URL correct? Check before you proceed with filling in your username and password, he counseled.

Software updates are vital. "Make sure you have the most up-to-date version of whatever software you use," he said. Updates almost always patch security holes. Key software programs such as Adobe Systems' Flash Player and Reader are particularly important to keep updated because they're used on so many computers and are prime targets for hackers.

He also suggested creating a virtual machine on your computer, using software such as VMware, as a security measure.

"It's really hard to get people to change their browsing habits," he said. People want to surf the Web fast, visit their favorite sites and download whatever they want without thinking too much about security. "Educate them, move them along, but don't expect them to become security experts."

Internet browser makers take great care in building as much security as possible into their products and putting them through rigorous testing.

The security team for Google's Chrome browser, for example, will take the first crack at any major update to the software, hacking away to find vulnerabilities or ways to improve security, said Chris Evans, an information security engineer at Google.

After the Chrome security team takes a whack at the software and it is reworked to fix the holes they found, other security teams at Google will have a go at the product to see what trouble they can cause. Finally, the software is released in beta form, and private security researchers and others can hack away. Any problems are fixed before the final release goes out and then the Chrome team stands ready to make new patches for any other security issues that crop up.

Despite all the testing, browser makers are only one part of the security solution because they have no control over Web software or user browsing behavior.

The cloud is the Wild West: hackers and malware makers abound, phishers seek passwords and users do whatever they want to, recklessly surfing and downloading potentially dangerous content as judged by security researchers.

Companies developing Cloud applications and services will need to do more for Web security. Companies such as Amazon.com, with its Web Services, and Google, as it moves forward with initiatives like Google Docs that draw people toward Web applications and away from computer applications, will need to work more closely with security researchers, Meer said.

And Google's work on the security in the Chrome browser highlights the reason why: Computer applications such as Chrome face intense scrutiny by security researchers throughout the Web, while Web applications do not.

"Reverse engineering keeps [big software companies] honest," said Meer. "If they hide something in the software code, sooner or later someone finds it. With Cloud services, you just don't know because we simply cannot verify it."

Cloud applications are built by one company, and nobody is looking at the code or how safe it is, said Meer. Applications for computers are different. They can be ripped apart by security experts then put back together stronger so there are no security holes, he said.

"Trust but verify," said Meer. "Just because a guy does no evil today, we cannot trust that they will do no evil tomorrow because we simply cannot verify it."


Articles like these from Network World remind us why we have proxies in place in our networks. While they won't prevent all problems and threats, they are the first step in protecting web users from the new threats in the "Wild Wild Web".

Wednesday, October 14, 2009

From Sidekick to Gmail: A short history of cloud computing outages

Network World covered the recent Microsoft-T-Mobile-Sidekick data loss mess, and reminded us that it wasn't the first time data was lost in the cloud. While cloud computing remains the latest buzzword, this latest event is definitely enough to give pause to any IT administrator considering a move to the cloud.

Here's Network World's short history of cloud computing SNAFUs:

Microsoft Danger outage: Contacts, calendar entries, photographs and other personal information of T-Mobile Sidekick users looks to be lost for good following a service disruption at Sidekick provider Danger, a Microsoft subsidiary. The amount of data and number of users affected wasn't disclosed by Microsoft or T-Mobile, but Sidekick support forums were buzzing with pleas from users looking for tips on how to restore their devices or get their data back.

Google Gmail fails…again: When Google's Gmail faltered on Sept. 24, it wasn't down for more than a couple of hours, but it was the second outage during the month and the latest in a disturbing string of outages for Google's cloud-based offerings, including Google search, Google News and Google Apps over the past 18 months. Various explanations have been served up by the vendor, from routing errors to server maintenance issues. Some have come to Google's defense, saying that even though the company has had its share of outages, we are talking about mainly free services (you get what you pay for, in other words).

Twitter goes down…and yes, that's news: While Twitter had been keeping its Fail Whale in hiding more often than not, a big Twitter outage that lasted throughout the morning and into early afternoon in early August had social networking types fuming. A denial-of-service attack was blamed for the problem.

eBay's PayPal crashes: The PayPal online payments system failed a couple of times in August, leaving millions of customers unable to complete transactions. A network hardware issue was fingered as the culprit for the outage, which lasted for between 1 and 4.5 hours, depending on how you look at it. It cost PayPal millions of dollars in lost business; it's unclear how much it cost merchants.

Rackspace pays up: Rackspace was forced to pay out between $2.5 million and $3.5 million in service credits to customers in the wake of a power outage that hit its Dallas data center in late June. Rackspace, which offers a variety of hosting and cloud services for enterprise customers, suffered power generator failures on June 29 that caused customer servers to go down for part of the day. More disruptions followed and Rackspace kept customers up to date via its blog.

Windows Azure test release goes down: Early adopters of Microsoft's cloud-computing network Windows Azure suffered an overnight outage over a weekend in mid-March during which their applications being hosted on the network weren't available. This was only a test release of Azure, so observers noted that this obviously wasn't as big a deal as a production service outage. Separately, Microsoft also suffered a Hotmail messaging system outage in March.

Salesforce.com kicks off the Year of the Cloud Outage: As CIO.com's Thomas Wailgum reported in January, Salesforce.com suffered a service disruption for about an hour on Jan. 6 due to a core network device failing because of memory allocation errors.

Amazon S3 storage service knocked out: We actually have to go back to the summer of 2008 to find coverage of the last major Amazon S3 cloud network outage, which lasted for 7 to 8 hours and followed another outage earlier that year caused by too many authentication requests.


So the question remains as to whether cloud computing is mature enough for the enterprise market.

Tuesday, October 13, 2009

Barracuda snags Purewire in Web security play

It was announced today that security appliance maker Barracuda Networks has acquired Purewire, a Web security-as-a-service provider. The acquisition gives Barracuda a Web security SaaS offering, and Barracuda reported that the deal also adds to its security research and threat detection capabilities.

Barracuda offers lower-end e-mail, Internet, Web, and instant messaging protection in appliance form factors, much of it based on open-source software. Purewire launched its Trust Web reputation service earlier this year.

Monday, October 12, 2009

Cisco shines light on dark corners of the Web

Cisco announced last week the launch of software that shines light on potentially troublesome websites hidden in what the US computer security firm dubbed the "Dark Web." The idea behind Cisco IronPort Web Usage Controls is to identify content that has escaped detection by business IT managers and security applications because of its stealthy nature on the Internet. Cisco claims it can identify as much as 90% of this traffic.

The Dark Web (as Cisco calls it) has formed largely as a result of the tidal wave of Web pages triggered by Web 2.0 trends in user-generated content, such as blogging and social networking.

According to the AFP:

Only 20 percent of the more than 45 billion websites in the world are reportedly categorized effectively enough to be used by filtering programs, leaving 80 percent of the Web in the dark.

Tests of Ironport Web Usage Controls reportedly identified 50 percent more off-limits websites than did previous-generation filtering software relying on website address lists.

"We are doing pretty well; there is room for improvement," Kennedy said. "You have to balance between catch rate and false-positive rate."

False positives are times when filtering software blocks access to websites that don't deserve to be off-limits by company standards.


Cisco's announcement is a good reminder that URL lists, while important, should never be the only source of protection in the web proxy. In today's Web 2.0 world, you absolutely need some type of real-time rating system to find "dark web" pages and protect your users from this content. The good news here is that there are proxy vendors who already provide this type of service, including, now, Cisco.
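To make the layered approach concrete, here is a highly simplified sketch of how a proxy might combine the two: consult the static URL category list first, and fall back to a real-time rating only for uncategorized pages. The category names, the stand-in database, and the toy rating heuristic are all invented for illustration, not any vendor's actual implementation:

```javascript
// Stand-in for a vendor-supplied static URL category list.
var urlDatabase = {
  "known-bad.example": "malware",
  "news.example": "news"
};

// Placeholder for a real-time content-rating service. As a toy
// heuristic, hosts that are raw IP addresses are flagged; a real
// rater would analyze page content, links, and reputation signals.
function rateInRealTime(host) {
  return /^\d+\.\d+\.\d+\.\d+$/.test(host) ? "suspicious" : "unrated";
}

// Two-stage lookup: static list first, real-time rating second.
function categorize(host) {
  return urlDatabase[host] || rateInRealTime(host);
}
```

The point of the second stage is exactly the "dark web" problem described above: the static list can never keep up with user-generated content, so anything it has never seen still gets some rating before the proxy decides to allow or block it.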

Thursday, October 8, 2009

Hotmail passwords heisted by hackers

Sophos blogger Chester Wisniewski noted on his blog this week that over 10,000 usernames and passwords were publicly disclosed from users of hotmail.com, msn.com, and live.com email services. All of the accounts initially posted begin with the letter a or b, suggesting that this may be the tip of the iceberg.

From Sophos:

BBC News contacted Microsoft and was able to confirm the validity of the accounts that were released.

Microsoft has released a public statement saying their investigation determined the IDs were stolen through a phishing attack. Part of their statement said "As part of that investigation, we determined that this was not a breach of internal Microsoft data and initiated our standard process of working to help customers regain control of their accounts."

This raises the question of how many people fell victim to this attack, and is it still underway? I may not be able to answer these questions, but with over 10,000 accounts exposed from the first 2 letters of the alphabet the scope of this fraud could be very large. Users who have followed Graham Cluley's (from Sophos) advice about using separate passwords for each site they use will minimize their exposure to just Microsoft's online services.

Another question is what Microsoft means by "due to a phishing scheme". Was this another "view your blocked MSN friends" website, or was it a direct phish of an impostor Hotmail login page? SophosLabs blogged about these attacks early in September, and it seems likely this may be related.

Computer World reported that this may be a similar attack to the one that disclosed private emails of vice presidential candidate Sarah Palin during last year's U.S. election. I find this to be highly improbable. To compromise 10,000 or more accounts in an apparently serial manner would not be practical by guessing security questions. It is far more likely that users were duped into providing their passwords to a fraudulent website posing as Microsoft or an affiliate.

My recommendation for users of Microsoft's online services is to change your passwords immediately. It is better to be safe than sorry, and password rotation is something we are often too lazy to do. This is a great time to log into those Facebook, Twitter, Gmail, and Yahoo! accounts and do likewise, as a simple best practice to prevent yourself from becoming a victim of habit.

Password rotation is not fun, but it is a great preventative to these types of disclosures.
If you are an IT administrator, this would be a great time to remind your users to change their Windows Live, MSN, and Hotmail passwords. Additionally, as always, be sure your anti-spam protection is current, and educate your users about phishing and about clicking links in e-mail. Sophos Web Appliance customers have been protected against the MSN friends scam for some time now; however, technology and education together are always the best solution.


And of course there's the bit of security the IT admin can make sure is up to date: the proxy, with its URL database, real-time rating system, and malware scanning software.

Tuesday, October 6, 2009

Can you live without the web for a week?

How tightly embedded is the web in our everyday lives, both from a business and a personal point of view? Going without the web makes life more difficult, as the web and all its aspects improve communication, reduce prices, save time, and keep you in touch with information for business and pleasure.

Can you live without the web for a week? You can read a diary here of one man's attempt: www.ft.com/digitalbusiness

There are also a podcast and a YouTube video about this challenge.

Monday, October 5, 2009

PAC File Creation

For those of you who manage proxies and are looking for a good guide to writing PAC files, there exists a website that covers almost everything you'd need:

http://www.returnproxy.com/proxypac/

In addition to tips and tricks, it also has some sample files for you to use, a way to test files, and troubleshooting tips.
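To give a taste of what such a guide covers, here is a minimal, hypothetical PAC file. A PAC file is just a JavaScript function, FindProxyForURL, that the browser calls for each request; the internal domain and proxy address below are placeholders, not real infrastructure:

```javascript
// Helper: does host end with the given domain suffix?
// (Written with indexOf rather than newer string methods, since
// PAC engines typically support only older JavaScript.)
function endsWithDomain(host, domain) {
  var idx = host.length - domain.length;
  return idx >= 0 && host.indexOf(domain, idx) === idx;
}

// Called by the browser for every request.
function FindProxyForURL(url, host) {
  // Plain hostnames (no dots) and the internal domain bypass the proxy.
  if (host.indexOf(".") === -1 || endsWithDomain(host, ".example.internal")) {
    return "DIRECT";
  }
  // Everything else goes through the proxy, falling back to a
  // direct connection if the proxy is unreachable.
  return "PROXY proxy.example.com:8080; DIRECT";
}
```

Real PAC files often also use built-in helpers such as shExpMatch and isInNet, which the site above documents along with ways to test your file before deploying it.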

Thursday, October 1, 2009

Who makes anonymous proxies and why?

We've talked about anonymous proxies on this blog in the past, and recently I came across an article that discussed why someone would want to host an anonymous proxy. I've attached the link above and included some of the relevant information below:

Anonymous proxies require a lot of bandwidth to host. This bandwidth costs money, sometimes quite a lot. So who is hosting these proxies, and who is footing the bill? A few proxies are hosted by technically-adept students, bypassing their school filters, and limiting the use to a select group of their peers. Frequently these types of proxy are hosted on a home broadband connection, but with a handful of users, that's no problem. These are the only truly 'free' forms of proxy and they can also be pretty tricky to block – URL list-based filters will have a difficult time trying to catch them!

Public web proxies on the other hand (the most common type) can eat their way through many gigabits of bandwidth. The cost of this is usually offset by placing pay-per-click adverts on the proxy page. Revenue is minuscule, but with many hits, it all adds up. Of course, the proxy owners have to advertise too – top proxy lists are one way of doing this, but sometimes legitimate ads are placed as well. Some software-based proxies charge a fee but the majority are free and don't carry any ads. Since it is highly unlikely that the creators are magnanimously footing the hosting bills, these proxy services will undoubtedly be selling on browsing habits, injecting ads or unwanted text, and even pushing malware.

Many students who use anonymous proxies are also unaware of the risks to their own personal security and identity. Malicious proxy servers do exist and are capable of recording everything sent to the proxy, including unencrypted logins and passwords. Although some proxy networks claim to only use ‘safe’ servers, due to the ‘anonymous’ nature of these tools, proxy server safety is impossible to police. Students should be educated to understand that whenever they use a proxy, they risk someone “in the middle” reading their data.

Other tips to prevent proxy abuse:
• Educate teachers to recognise illicit surfing or proxy abuse and report it to the IT department.
• Educate students about the danger of using proxies.
• Allow slightly more lenient filtering outside of core hours.
• Make sure your AUP covers anonymous proxying and that both students and teachers are familiar with its content. Make it clear that proxy abuse can be tracked to individuals.