Welcome to the Proxy Update, your source of news and information on Proxies and their role in network security.

Friday, February 26, 2010

Web Gateway Deployment Methodologies - SPAN Port Deployment

In today’s complex network architectures, sometimes it seems there are limitless ways to deploy networking equipment. While that may be true for some networking gear, in practice there are only a few proven deployment methodologies for web gateways that are effective and provide complete security. In this series, we’ve talked about the four most common types of web gateway deployments. Sometimes referred to as forward proxies, these devices are used to secure web access for an organization’s internal end-users. The four commonly used deployment scenarios for web gateways are: inline proxy, explicit proxy, transparent, and SPAN port. Each of these deployments has its advantages and disadvantages, and we’ve discussed these as we’ve explained each methodology over the last few days. Today’s article is the last in the series and covers SPAN port deployments, sometimes referred to as TCP reset.

SPAN Port Deployment

The last deployment methodology we’ll discuss is SPAN (Switched Port ANalyzer) port deployment. This method is sometimes also called TCP Reset deployment, as it relies on TCP resets to implement the web gateway’s policy. The web gateway is deployed by attaching it to a SPAN port on a switch. (See Figure 4) Unlike the other three deployment methods, which sit in the traffic path and implement policy through the network responses the web gateway itself issues, a web gateway on a SPAN port sees only a copy of the traffic and implements policy by issuing a TCP reset to the client system to prevent it from completing the download of offending content.
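To make the mechanism concrete, here is a minimal sketch of the reset technique (an illustration, not any vendor’s actual implementation) in Python, assuming the scapy library is available. Given an offending client-to-server packet observed on the SPAN port, it forges a reset toward the client as if the reset came from the server; the addresses and sequence numbers are copied from the observed packet.

    # Minimal illustration of TCP-reset policy enforcement (assumes scapy).
    from scapy.all import IP, TCP, send

    def reset_client(pkt):
        # pkt is an observed client-to-server packet for the offending flow.
        ip, tcp = pkt[IP], pkt[TCP]
        rst = IP(src=ip.dst, dst=ip.src) / TCP(
            sport=tcp.dport,   # pose as the server
            dport=tcp.sport,
            flags="R",
            seq=tcp.ack,       # the sequence number the client expects next
        )
        send(rst, verbose=False)

Note that the race condition described under the disadvantages below is visible even in this sketch: the forged reset only works if it reaches the client before the real server finishes delivering the offending content.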

SPAN Port Advantages

SPAN port deployments allow larger-scale deployments, because a monitoring-mode deployment typically uses fewer resources than an inline, explicit, or transparent deployment, all of which must actively process traffic. SPAN port deployment can therefore be useful if you think your hardware might be undersized for your needs.

SPAN Port Disadvantages

One of the disadvantages of a SPAN port deployment is that the web gateway does not see all the traffic: corrupt network packets, packets below minimum size, and layer 1 and 2 errors are usually dropped by the switch. In addition, a SPAN port can introduce network delays. The software architecture of low-end switches introduces delay in copying the spanned packets, and if the data is being aggregated through a gigabit port, a further delay is introduced as the signal is converted from electrical to optical. Any network delay can be critical, since the TCP reset used to implement policy must arrive before the offending transfer completes.

SPAN ports also have a problem under heavy traffic: when the port is overloaded it will typically drop packets, resulting in some data loss. In a high network load situation, most web gateways connected to a SPAN port will not be able to respond quickly enough to keep malware from spreading across a corporate network.

Recently a Network World article (Dec 7, 2009) discussed the TCP reset method used by web gateways to implement policy:

Too clever by half, perhaps – TCP RESET has several drawbacks.

First, a cyber attacker can cause a "self-inflicted DoS attack" by flooding your network with thousands of offending packets. The TCP RESET gateway responds by issuing two TCP RESETs for every offending packet it sees.

The TCP RESET approach is worthless against a cyber attacker who uses UDP to "phone home" the contents of your sensitive files.

The gateway has to be perfectly quick; it has to send the TCP RESET packets before the client (victim) has processed the final packet of malware.

Ergo – deep and thorough inspection of network traffic before it's allowed to flow to the client is the most effective way to stop malware.

… In other words, don't just wave at the malware as it goes by.
--Barry Nance, Network World, Dec 7, 2009


Conclusion

While there are four common deployment methodologies to choose from when implementing a secure web gateway, there are really only three clear choices for IT departments. The choice among inline, explicit, and transparent deployment has to be made based on the needs and resources of the organization and the IT department. While SPAN port deployment with TCP reset may seem like a reasonable solution, it has enough drawbacks that a serious web gateway deployment should avoid this methodology.

Thursday, February 25, 2010

Web Gateway Deployment Methodologies - Transparent Deployment

In today’s complex network architectures, sometimes it seems there are limitless ways to deploy networking equipment. While that may be true for some networking gear, in practice there are only a few proven deployment methodologies for web gateways that are effective and provide complete security. In this series, we’re talking about the four most common types of web gateway deployments. Sometimes referred to as forward proxies, these devices are used to secure web access for an organization’s internal end-users. The four commonly used deployment scenarios for web gateways are: inline proxy, explicit proxy, transparent, and SPAN port. Each of these deployments has its advantages and disadvantages, and we’ll discuss these as we explain each methodology over the next few days. We’ve already examined Inline and Explicit deployments. Today we’ll look at Transparent deployments.

Transparent Deployment


Transparent deployment allows a web gateway to be deployed in any network location that has connectivity (similar to explicit mode deployment), reducing the need for network configuration changes. (See Figure 3) In addition, there’s no overhead of configuring each end-user’s system, since the routing of HTTP and HTTPS traffic to the gateway is typically done by a router or other network device. Transparent deployment is often used when an organization is too large for an inline deployment and does not want the added work and overhead of an explicit deployment. Most transparent deployments rely on the Web Cache Communication Protocol (WCCP), a protocol supported by many network devices. Alternatively, transparent deployment can also be achieved using Policy-Based Routing (PBR).
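As a rough illustration only (commands vary by platform, software version, and gateway vendor), redirecting web traffic to a gateway via WCCP on a Cisco router looks something like the first snippet below; the second sketches the PBR alternative. The interface name and the 10.1.1.5 gateway address are placeholders.

    ! WCCP: enable the standard web-cache service and redirect LAN traffic
    ip wccp web-cache
    interface GigabitEthernet0/0
     ip wccp web-cache redirect in

    ! PBR alternative: match web traffic and hand it to the gateway
    access-list 101 permit tcp any any eq www
    route-map WEB-TO-PROXY permit 10
     match ip address 101
     set ip next-hop 10.1.1.5
    interface GigabitEthernet0/0
     ip policy route-map WEB-TO-PROXY

In both cases the end-user’s system needs no changes; the network itself steers web traffic to the gateway.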

Transparent Deployment Advantages

The main advantages of deploying a web gateway in transparent mode include narrowing the amount of traffic processed by the proxy and the ability to more easily implement redundancy for the web gateway. In addition, transparent deployment does not require changes to end-user systems.

Transparent Deployment Disadvantages

Transparent deployment does depend on the availability of either WCCP or PBR in the network, and on support for these in the web gateway; typically such support is found only on more sophisticated web gateways. Configuration can also be trickier, as the versions of WCCP supported by the router and by the web gateway need to be compatible. More in-depth network expertise is required to implement a transparent mode deployment, which is typically not a problem in larger organizations but may be an issue for smaller ones.

Tomorrow we'll look at SPAN port deployments.

Wednesday, February 24, 2010

Web Gateway Deployment Methodologies - Explicit Deployment

In today’s complex network architectures, sometimes it seems there are limitless ways to deploy networking equipment. While that may be true for some networking gear, in practice there are only a few proven deployment methodologies for web gateways that are effective and provide complete security. In this series, we’re talking about the four most common types of web gateway deployments. Sometimes referred to as forward proxies, these devices are used to secure web access for an organization’s internal end-users. The four commonly used deployment scenarios for web gateways are: inline proxy, explicit proxy, transparent, and SPAN port. Each of these deployments has its advantages and disadvantages, and we’ll discuss these as we explain each methodology over the next few days. Yesterday we looked at Inline deployments; today we’ll examine Explicit deployments.


Explicit Deployment


Explicit deployment is fairly common when a web gateway is deployed in a larger network whose design requires that there be no single point of failure. Explicit deployment allows the web gateway to be located anywhere on the network that is accessible by all end-users, as long as the device itself has access to the internet. (See Figure 2) Explicit deployment is done through an explicit proxy definition in each web browser. To make this kind of deployment easier, an administrator can use PAC or WPAD files to distribute the proxy settings to end-users’ browsers.
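For reference, a PAC file is just a small JavaScript function that the browser evaluates for every request. A minimal sketch (the proxy host, port, and internal domain below are placeholders):

    function FindProxyForURL(url, host) {
        // Send internal hosts direct; everything else goes through the gateway.
        if (isPlainHostName(host) || dnsDomainIs(host, ".example.com"))
            return "DIRECT";
        return "PROXY proxy.example.com:8080";
    }

WPAD simply automates delivery of such a file: browsers discover its location via DHCP or DNS instead of being configured by hand.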

When using explicit deployment, it is extremely important to have your firewall properly configured to prevent users from bypassing the proxy. The firewall needs to be configured to allow only the proxy to talk through it using HTTP and HTTPS; all other hosts/IP addresses should be denied. In addition, all other ports need to be locked down, to prevent end-users from setting up their own internal proxy that goes out to the internet via HTTP on a port other than the commonly used ones (80 and 443).
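On a Linux-based firewall, that lockdown might look roughly like the following iptables sketch (10.1.1.5 stands in for the proxy’s address; a production rule set would be more involved):

    # Permit only the proxy to originate HTTP/HTTPS through the firewall
    iptables -A FORWARD -s 10.1.1.5 -p tcp -m multiport --dports 80,443 -j ACCEPT
    # Everyone else is denied on those ports...
    iptables -A FORWARD -p tcp -m multiport --dports 80,443 -j DROP
    # ...and all other forwarded traffic is dropped by default
    iptables -P FORWARD DROP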

Explicit Mode Advantages

The main advantages of deploying a web gateway in explicit mode include narrowing the amount of traffic processed by the web gateway (you can limit it to HTTP-based traffic only) and the ability to more easily implement redundancy for web gateways in your environment. For an environment without an existing web gateway, explicit mode deployment is also less disruptive to the network, as the web gateway can be placed anywhere in the network that is accessible by all end-users and can reach the firewall to the internet.

Explicit Mode Disadvantages

The disadvantage of explicit mode deployment is typically the IT administrative overhead, as each end-user’s system needs a configuration change in order to work properly. While PAC and WPAD reduce this overhead somewhat, any misconfigured end-user system will result in a helpdesk call and require a sysadmin to rectify the situation. Explicit mode deployment also relies heavily on a properly configured network and firewall: as discussed earlier, any hole in the network or firewall can be exploited by a knowledgeable end-user to bypass the web gateway.

Tomorrow we'll look at Transparent deployments.

Tuesday, February 23, 2010

Web Gateway Deployment Methodologies - Inline Deployment

In today’s complex network architectures, sometimes it seems there are limitless ways to deploy networking equipment. While that may be true for some networking gear, in practice there are only a few proven deployment methodologies for web gateways that are effective and provide complete security. In this series, we’ll talk about the four most common types of web gateway deployments. Sometimes referred to as forward proxies, these devices are used to secure web access for an organization’s internal end-users. The four commonly used deployment scenarios for web gateways are: inline proxy, explicit proxy, transparent, and SPAN port. Each of these deployments has its advantages and disadvantages, and we’ll discuss these as we explain each methodology over the next few days. For today’s article, we’ll focus on Inline deployments.

Inline Proxy Deployment

Inline deployment is probably the simplest methodology and the easiest to describe. Smaller deployments, like branch office scenarios, typically use inline deployment because of its ease of deployment and the complete coverage it provides.

With an inline deployment, the web gateway is placed directly in the path of all network traffic going to and from the internet. (See Figure 1) In this scenario, all network traffic goes through the web gateway device. If you choose this deployment methodology, make sure your web gateway is capable of bypassing network traffic that you don’t want processed. In many instances, you can choose to either “proxy” or “bypass” a specific protocol. If you “proxy” the protocol, the web gateway terminates the client’s connection to the server locally, and then establishes a new connection to the server, acting as the client, to retrieve the requested information.
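That terminate-and-reconnect pattern is easy to see in code. Below is a bare-bones Python sketch of it, with no policy, scanning, or error handling, and with a placeholder upstream address; a real gateway would parse each request and apply policy between the two legs of the connection.

    import socket
    import threading

    def pipe(src, dst):
        # Copy bytes one way until the connection closes.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.close()

    def handle(client):
        # Terminate the client's connection locally, then open a fresh
        # connection to the server, acting as the client.
        upstream = socket.create_connection(("www.example.com", 80))
        threading.Thread(target=pipe, args=(client, upstream)).start()
        pipe(upstream, client)

    listener = socket.socket()
    listener.bind(("0.0.0.0", 8080))
    listener.listen(5)
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,)).start()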

Inline Deployment Advantages

The upside of an inline deployment is the ease of deployment and the assurance that all web traffic will flow through the device. As long as the device is inline and on the only path available to the internet, there is no practical way for an end-user to bypass its controls, since all internet-bound HTTP traffic will be processed and handled by the web gateway. Inline is generally considered the most secure deployment methodology and the way to go if security is the primary concern.

Inline Deployment Disadvantages


The downside of an inline deployment is the single point of failure. Even with technologies like “fail to wire,” which allow all traffic to flow through when a device fails, many organizations are uncomfortable with a single device in the data stream to the internet. Any partial failure of the device could cause an outage, which is the main concern with this deployment. For a small organization or a branch office, a short disruption is probably not as large a concern as it is for a larger organization that may view internet accessibility as mission critical.

Another disadvantage of inline deployment is the requirement to manage every protocol that traverses the web gateway (a side effect of this being the most secure method of deployment). Because the web gateway is inline, every other protocol (FTP, CIFS, etc.) will need to be either proxied or bypassed (for protocols that the web gateway cannot handle). The IT admin will need to administer this list and the handling of each protocol used by the organization.

Tomorrow, we'll look at Explicit Deployments.

Saturday, February 20, 2010

How Suspicious are Dynamic DNS Sites?

From: How Suspicious are Dynamic DNS Sites?


There is still no official update from Google providing details of the "Aurora" attack, but we continue to see second- and third-wave attacks in our logs. As it looks like most of the host sites are using "Dynamic DNS" subdomains, I thought this would be a good time to write about this often-abused part of the Internet.

DynDNS domains are a special type of "Free Web Host". A traditional free host provides you with a certain amount of space on one of their servers, and some sort of toolset to manage your site (think of geocities.com in the "good old days" of the Internet; freehostia.com is a current example, but there are hundreds). You choose an available domain name, which becomes a subdomain of the main site: e.g., mycooldomain.freehostia.com. DNS queries come in for your subdomain to the main DNS server for the host domain, and it resolves the request to point to the directory on the server farm where your site lives. You upload some content, and you're up and running.

A DynDNS "hosting" approach is similar externally (you pick a name that will become a subdomain of one of the DynDNS host's domains: mycooldomain.dyndns.biz or someotherdomain.dnsalias.net, and their domain's DNS server will resolve it), but there is a key difference: they don't actually host your site (just the name). Your site actually lives somewhere else on the internet, often at a dynamic IP address such as you might have at home with your Internet connection. You simply update their database whenever your IP address changes (either manually or via a script), and people can always find your site (except perhaps for short periods of time during an address transition).

This is a useful solution for tech hobbyists who want to play around with hosting their own domains on their own boxes -- e.g., to learn how a web server works, with full control over the box. Such a hobbyist site is unlikely to have a large user community (often just the hobbyist and a few friends), and probably hosts very eclectic content.

However, this sort of hosting is also a useful solution for a Bad Guy, particularly one who has a network of "bots" that can serve as invisible web servers. Using a DynDNS host, the Bad Guy can point his newmalwaredomain.com URL at any of the bots, and let them take turns serving content. By rotating the assigned domain name among widely separated bots, he makes it harder for the Good Guys to figure out where his base of operations is.

Knowing this, we created an internal web filtering category called "DynDNS" over a year ago, and began making a distinction between domains there and those in our traditional "WebHost" category. Externally (that is, from the customer's point of view), the DynDNS URLs show up in the WebHost category, but internally, we can study how they are being used.

"And how are they being used?" you ask.

Well, for this blog post, I pulled sample sites from the past week's traffic logs, grabbed a random set of 100 sites, and took a look. Here's how it broke down:

DynDNS Domain Usage (random sample of 100 sites)

Count  Description
   25  appeared to be legitimate sites (usually tech-oriented, as you might expect)
   24  were DNS failures (i.e., the DynDNS host did not recognize the subdomain as valid)
   21  timed out or returned a 404 (i.e., the host thought the subdomain was valid, but the server never responded, or reported that the requested page doesn't exist)
   10  were restricted-access pages: either returning a 403 (not authorized) code or bringing up a password form blocking access
    2  were Under Construction pages
   18  were suspicious/shady in some way: a couple were hosting warez (audio files or movies), one was an open web proxy, and 15 had obvious junk/throwaway machine-generated names coupled with blank pages, 404s, or signs of abandonment
    0  instances of malware or malware links were found (although of course some of the shady-name abandoned sites may have had malware on them originally)



(As you might expect, none of these sites had more than a few hits in the logs.)

Several of the "404" pages were actually from networks of sites that appear to be involved with affiliate clicks/sales of various goods (usually on Amazon). These used obvious machine-generated names (to be unique), most involving the "search terms" of the advertised goods. You get the 404 errors unless you have the proper "decorations" in your query. These appear to be non-malicious, just a bit on the "possibly scammy" side. Some examples were:

  • best-camera-films-yc.homelinux.com

  • bicycles-parts-cr.homelinux.com

  • cb-radios-lists-pr.homelinux.com

  • 3jewelryxfznn.dyndns.org

  • a156.e3e3b26.dyndns.org




    So what can we conclude? Obviously, this sort of "hosting" tends to involve sites with short lifespans, but it's hard to argue from the log data that there is a hugely elevated risk of malware that would justify a blanket rating of "Suspicious" or "Malware" for all DynDNS sites.

    However, it is relatively easy for an individual customer (especially a business customer with above-average security needs) to make an argument for blanket-blocking all DynDNS domains, due to a lack of a strong business case for leaving them unblocked. There simply isn't a lot of valuable content out there in this ecosystem. (Of course, that's true of a lot more of the Internet than just DynDNS sites, but still...)

    The only exception that I can see for a business might be a special-purpose tech site (possibly even set up by the customer's own IT staff) for testing purposes, or for access to a particularly esoteric but vital type of data -- and the small number of these could be whitelisted as needed.

    Accordingly, I'd be interested in hearing from customers who would like the option of being able to blanket-block DynDNS sites. Is this something you would want?

    Friday, February 19, 2010

    Google Buzz attracting spammers already

    From: http://news.idg.no/cw/art.cfm?id=CF83BB31-1A64-67EA-E4C58F2F74DC1336

    Despite Google Buzz, the search engine's social network, having launched only this week, spammers are already targeting it, says Websense.

    Websense said that when Twitter launched, it took a little while before it was targeted by spammers. However, in an indication of how rapidly spammers are learning to abuse social networks, it took only two days before they started to hit Google Buzz.

    "It's worrying that spammers have an improved knowledge of social networks these days that allows them to hit new services like Google Buzz so rapidly," said Carl Leonard, security research manager at Websense.

    "To embrace social networks like Google Buzz safely, businesses need to protect themselves and their employees with a security solution that keeps up with constantly changing web content in real time."

    The security firm said Web 2.0 sites allowing user-generated content are a top target for cybercriminals and spammers, and research revealed that 95 percent of user-generated comments to blogs, chat rooms and message boards are spam or malicious.

    Furthermore, during the second half of the year, 81 percent of emails contained a malicious link.

    "Today's emerging threats often evade traditional antivirus and security solutions, demonstrating the need for unified Web, data and email security. With the right support, web 2.0 opens up a host of new opportunities which can deliver real business benefits."

    Websense is advising web users to use caution when clicking on unknown links. It also revealed it hopes Google is prepared to deal with the volume of spam it is bound to see on the new service.

    Thursday, February 18, 2010

    Critical Infrastructure Targeted by Malware

    From: Critical Infrastructure Targeted by Malware


    Organizations within the critical infrastructure, such as oil, energy and chemical industries, experienced a higher percentage of malware in 2009 than organizations in other sectors. According to web security firm ScanSafe, critical infrastructure companies experienced at least twice as much malware as other organizations.

    According to ScanSafe’s ‘Annual Global Threat Report 2009,’ the energy and oil industries suffered the largest amount of data-theft Trojans, experiencing over 350 percent more than other industries. Other sectors with a significant amount of contact with Trojans include government, chemical, banking and finance, and pharmaceutical.

    Mary Landesman, senior security researcher at ScanSafe, said “There is a misconception that cybercriminals are only intent on stealing data intended for credit card fraud and identity theft. In reality, cybercriminals are casting a much wider net.”

    “Consumer credit card details are child’s play compared to the value of infrastructure and intellectual data from these sensitive verticals. The message is clear – cyberwar is already here. The Web is the battlefield and the enterprise is on the frontlines,” she said.

    By the end of 2009, the average company experienced 19 encounters with malware per day, and almost one quarter of the malware was zero-day, meaning it was undetectable by signature-based methodologies.

    Wednesday, February 17, 2010

    Study finds that malware infections growing

    From: http://www.mysanantonio.com/business/84358082.html

    A recent report by a security startup company suggests that the number of Web pages infected with malware almost doubled in the last quarter, compared with a year ago.

    More than 560,000 Web sites and their approximately 5.5 million pages were infected with malware in 2009's fourth quarter, according to Dasient, based in Palo Alto, Calif.

    In those three months, sites for Fox Sports, technology blog Gizmodo and the Gerald R. Ford International Airport in Grand Rapids, Mich., were exploited to deliver malware to unsuspecting visitors.

    In contrast, a Microsoft security report identified about 3 million infected pages during the last quarter of 2008.

    The findings suggest that Web-based infections have proven an effective form of malware distribution for criminals, Dasient co-founder Neil Daswani said.

    “Web-based malware is working for attackers, and (they) have doubled their investment in these infection techniques,” he said.

    During the third quarter of 2009, Dasient found that more than 640,000 Web sites — comprising 5.8 million pages — were infected.

    While the amount of infections decreased from one quarter to the next, the figure has remained considerably high.

    Also, the likelihood of catching a bug from a larger, infected Web site grew. During that same period, hackers went from infecting a fifth to a fourth of all content in Web sites with 10 pages or more.

    “The implication for a Web site is, the more URLs get infected, the more difficult it is to identify where the infection occurred,” said Ameet Ranadive, another Dasient co-founder.

    And even after the malware was removed, four out of every 10 Web sites were reinfected in the fourth quarter.

    Not surprisingly, the number of sites that became infected rose right before and during Thanksgiving and Christmas, the busiest time for online shopping.

    Dasient researchers said they believe they are witnessing an important shift in the way malware is distributed.

    Typically, malware piggybacks on e-mail attachments or is distributed via online ruses such as fake anti-virus products that prompt potential victims to download malicious software into their computers.

    But this more recent brand of infection, known as a drive-by download, can turn even legitimate, trusted Web sites into potential infectors.

    In these attacks, malware begins downloading into victims' computers the minute they visit an infected Web site.

    Hackers can compromise Web sites through several techniques, such as exploiting vulnerabilities in Web applications, stealing the site's administrative credentials or infiltrating the site's ad network.

    “I used to say that drive-by downloads were an emerging threat, but that's no longer true,” said Adam Barth, a post-doctoral fellow at the University of California, Berkeley who has researched browser security.

    Some researchers point to recently reported attempts to compromise the Gmail accounts of Chinese human rights activists as an example of this type of attack.

    They believe criminals targeted and lured Google staffers into specific infected Web sites and exploited previously undiscovered vulnerabilities in Internet Explorer 6 to launch drive-by download attacks and to compromise their systems.

    In September, the New York Times' Web site also fell prey to this criminal tactic when hackers infiltrated the company's advertising network and managed to post an ad with malicious content.

    Tuesday, February 16, 2010

    Malicious Malware of First Ten Years of the 21st Century

    From: Malicious Malware of First Ten Years of the 21st Century

    ScanSafe has released its list of the most malicious malware that formed the threat landscape of the first decade of the millennium (21st century), which comes to an end this year (2010).

    The security company's list opens with the "I LOVE YOU" worm. Written in VBScript, it is said to be the most destructive worm of all time. It began in the Philippines on May 4, 2000 and circulated across the world in 24 hours, attacking 10% of all systems connected to the Web and causing damage of around $5.5 billion.

    ScanSafe's list shows that in mid-September 2001, the Nimda worm started circulating around the world. It was assisted by several means of proliferation, one of them the exploitation of numerous vulnerabilities in Microsoft IIS. Consequently, it became the most dangerous worm of 2002.

    In 2003, the Sobig worm hit the news. Sobig-infected systems were installed with a spam proxy, allowing mass-mailers to send huge chunks of unsolicited mail through victims' systems, even harvesting victims' own mail contacts to add to the spammers' mailing lists.

    ScanSafe also highlights the Bagle worm, which appeared in 2004. It is a piece of malware that circulates by itself over mail, network shares and disk drives. It has rootkit abilities that enable it to hide from the user. Further, the worm disables many antivirus solutions, which as a result may be unable to download definition updates.

    As per the security company, hackers' attacks became profit-driven and clearly illegal in 2005. After the Indian Ocean tsunami of December 2004, hackers started exploiting people's panic and inquisitiveness by publishing fake breaking-news alerts.

    ScanSafe adds that by 2006, the Storm botnet was under way. The Storm worm is a Trojan horse that arrives with an executable file as an attachment.

    'MPack', a PHP-based malicious kit created by Russian hackers, was released in 2007. The first version of the kit is thought to have been launched in December 2006, and it is believed that a new edition has been released nearly every month since its inception.

    Goolag and Gumblar are other malware families that created an uproar, in 2008 and 2009 respectively. On a concluding note, ScanSafe states that the 2010 threat landscape will be more harmful than that of previous years.

    Thursday, February 11, 2010

    Malicious code hits record-high in Jan

    From: http://www.zdnetasia.com/news/security/0,39044215,62061127,00.htm


    The amount of unique malware tracked by security vendor Fortinet reached an all-time high in January.

    Its distinct malware volume soared to over 9,000 last month, more than twice that in December, the company said in a statement Wednesday. Headquartered in Sunnyvale, Calif., Fortinet collects data from its FortiGate network security appliances and intelligence systems located globally, and compiles monthly threat statistics from the data.

    Topping the charts were variants of Bredolab, accounting for more than 40 percent of all malware activity. The Bredolab downloader program, which has held the No. 1 position since November 2009, has been associated with the Gumblar attacks, said Fortinet.

    Also highlighted in the report was the wave of attacks known as Operation Aurora--a major talking point following Google's threat last month to pull out of China. Fortinet said the attack, which uses a zero-day vulnerability in Microsoft's Internet Explorer browser, was ranked No. 4 on the list of top 10 attacks for January.

    The peak volume of threat activity last month signaled that 2010 will likely be "another action-packed year", Derek Manky, Fortinet's project manager for cybersecurity and threat research, said in the statement.

    "The amount of malicious code in the wild is increasing...while in-the-wild exploits and emerging zero-day attacks targeting very popular software, like Microsoft IE and Adobe PDF, create a vulnerable environment for users at every point of connectivity," he noted. "As the monetary gains of these threats continue to prove [valuable] to the criminals creating them, we'll only continue to see new and creative attacks take form."

    Wednesday, February 10, 2010

    Huge spike in companies blocking Facebook, Twitter: study

    From: http://www.windsorstar.com/jobs/job-listings/Huge+spike+companies+blocking+Facebook+Twitter+study/1913402/story.html

    A growing number of employers are refusing to be Facebook's friend.

    Companies around the world are increasingly choking off their employees' access to social-networking websites, such as Facebook, Twitter and MySpace, says ScanSafe, one of the Internet's biggest security providers.

    In the past six months alone, there's been a 20 per cent increase in the number of companies blocking such websites, says ScanSafe, which released a study on the phenomenon this week.

    "When web filtering first became an option for companies, we generally saw them block access to typical categories, such as pornography, illegal activities and hate and discrimination," said ScanSafe spokesman Spencer Parker.

    "I imagine, before long, social networking will be up there with pornography in terms of categories blocked."

    The company says 76 per cent of its customers are now choosing to stonewall social-networking sites, a higher percentage than those who block online categories such as shopping, weapons and alcohol.

    ScanSafe analyzed more than a billion web searches each month for its study.

    Parker said social-networking sites can open the door to viruses, as well as being a drain on productivity and bandwidth.

    James Norrie, the associate dean and professor at Ryerson University's Ted Rogers School of Management, harshly criticized the trend.

    He said banning employees from using social-networking sites is "one of the most awful things businesses can do to themselves."

    "The whole notion of trying to take technology away from (workers) is as good as spanking them and sending them to their room," said Norrie.

    Instead, companies should be teaching their employees to use social media so they can promote the company's brand online, he suggested.

    Taking the privilege away will only encourage skilled workers to seek out more dynamic employers, he said.

    "What employer that wants to be seen as progressive and attractive for a new generation of workers would think that it was good for their employee brand to block access to social computing sites?" Norrie said.

    "If they keep putting their heads in the sand, they'll fall so far behind that someone else will be eating their lunch."

    Norrie didn't dispute the research was genuine, but called ScanSafe's study "self-serving."

    David Zweig, who teaches human resources at the University of Toronto, said ScanSafe's global numbers are a good reflection of the current trend here in Canada.

    "We don't have very good stats in terms of how much employee monitoring takes place in Canada, but it is certainly on the increase," he said, adding Canadian employers are also increasingly monitoring e-mail.

    "Employees must assume that what they're doing at work is being monitored, and act accordingly."

    Zweig said when people are spending time on social-networking sites they're not working, so it's easy to see why employers would want to block employees from accessing the sites.

    "They want to stop people from potentially wasting time at work surfing these sites, especially if it's not job relevant," he said.

    Still, Zweig said employers need to communicate clearly to their employees what sites they're blocking and why, or face the prospect of deviancy from workers who feel that their employers don't trust them or that they're being treated unfairly.

    "They'll do things to get around the electronic gaze, for example," he said. "It can actually create a vicious cycle where doing this actually creates more deviant behaviour to get away from these restrictions and controls."

    ScanSafe also found an increase in the number of companies choosing to block websites about travel and sports, as well as web-based e-mail.

    Saturday, February 6, 2010

    Twitter Resets User Passwords in Wake of Phishing Attack

    From: http://www.pcworld.com/article/188392/twitter_resets_user_passwords_in_wake_of_phishing_attack.html


    Early Tuesday, Twitter said it had to reset the passwords of a small number of accounts compromised in an external phishing attack.

    "As part of Twitter's ongoing security efforts, we reset passwords for a small number of accounts that we believe may have been compromised offsite," Twitter wrote in a prepared statement.

    Twitter said it took the security action because of a "combination of multiple bad acts." One, it believes, is accounts being compromised by Twitter users signing up for what it described as "get followers fast schemes" luring people to a non-Twitter site. A Twitter spokesperson also said it suspects this third-party site "could have allowed hackers to gain access to email addresses and passwords. Those Twitter users who use the same email addresses and passwords could be affected."

    According to Twitter, at least one account was compromised by a phisher. In that instance, Twitter updates were sent out without the account owner's knowledge, Twitter said. "While we're still investigating and ensuring that the appropriate parties are notified, we do believe that the steps we've taken should ensure user safety," Twitter said.

    Twitter is no stranger to account hijacking. On Jan. 5, 2009, 33 prominent Twitterers (including Barack Obama and Britney Spears) had their accounts hacked by an individual who reportedly broke into the Twitter support tools (the tools Twitter uses to help users reset emails and passwords) and reset the passwords of the compromised accounts. In response to the attack, Twitter immediately shut down the support tools and restored the accounts to their rightful owners.

    On May 21, 2009, Twitter was hit by a phishing attack in which phishers created fake Twitter accounts and began following legitimate Twitter users. The Twitter users received email notifications of their new followers, with a link that led them to a fake Twitter site where they were prompted to enter their usernames and passwords.

    Twitter isn't alone in grappling with phishing attacks. Recently Facebook joined forces with McAfee to offer its users free antivirus software and increased protection from third-party phishing attacks.

    Since phishing attacks usually occur when people click on rogue links in emails (without checking to ensure that the emails are from who they say they're from), there's not much Twitter could have done to prevent the attack. However, security breaches like this one are unlikely to help Twitter's falling growth rate.

    Friday, February 5, 2010

    Timeline: A Decade of Malware

    From: http://news.idg.no/cw/art.cfm?id=8FB40E6D-1A64-67EA-E469E68A69E17435

    With the first decade of the millennium coming to a close this year, it seems a good time to take a look back at some of the malware that has helped shape the current-day attacks on the Web. Modern malware is commercially motivated. Instead of writing malware for ego gratification, today's attackers are using malware to make money. Looking back at the most notable malware of the last ten years, we begin to see how the industry has taken shape. From pesky spam pranks to a multi-million dollar 'black hat' industry, malware continues to evolve at a rapid pace, with no signs of slowing.

    1. 2001: Loveletter steals free Internet access

    In hindsight, the May 2000 Loveletter worm was a harbinger of things to come. The Loveletter worm combined social engineering (love letter for you) with a password-stealing trojan designed to harvest ISP usernames and passwords. The intent: to provide free Internet access to the worm's author (Read about current social engineering tactics in CSO's social engineering guide).

    2. 2002: JS/Exception bombs usher in malicious marketing

    In mid-September 2001, the Nimda worm began its rapid spread around the globe, facilitated by multiple means of propagation. One of the methods included modifying any .htm, .html, or .asp pages found on infected systems. The worm also spread by exploiting several vulnerabilities in Microsoft IIS, furthering the worm's ability to infect Web pages. As such, Nimda can be viewed as a pioneer in malware's eventual move to the Web.

    3. 2003: Sobig worm popularizes spam proxy trojans

    January 2003 ushered in the Sobig worm, a significant threat not fully appreciated until Sobig.E and Sobig.F appeared in the summer of that same year. Sobig-infected computers were outfitted with a spam proxy, enabling mass-mailers to send large volumes of unwanted email via victim computers, even harvesting the victims' own email contacts to add to the spammers' mailing lists.

    4. 2004: Bagle worm vies for dominance to harvest addresses and account information

    The monetary gains to be had from harvesting email addresses became even more apparent during the subsequent email worm wars in early 2004. Beginning with MyDoom and the Bagle worm, an interloper (Netsky) quickly jumped into the fray. The authors of Bagle then began coding variants of their worm that, in addition to dropping their own malware, would also remove Netsky. In turn, the Netsky author began neutering the MyDoom/Bagle infections while adding his own malicious code to the system. This prompted a response from the Bagle authors; hidden in Bagle.K's code was the message, "Hey Netsky, f*ck off you b*tch, don't ruine our business, wanna start a war?"

    5. 2005: Bot-delivering breaking news alerts

    Following the worm wars, named threats became fewer as attacks became more overtly criminal and profit motivated. To bypass technology, clever attackers began incorporating a much higher degree of social engineering in their attacks. In January 2005, following the previous month's tsunami in the Indian Ocean, scammers began targeting people's fear and curiosity through breaking news alerts. Links in the email that claimed to point to headline news actually pointed to malware that turned victim computers into bots (Read about how botnets are hunted and destroyed in The Botnet Hunters).

    6. 2006: The as-yet-unnamed Storm worm emerges

    By 2006, the Storm botnet was formally underway, though not named as such until January 2007, after a bogus breaking news alert claimed "230 dead as storm batters Europe." Coincidental to the alert, a very real storm in Europe did cause loss of life, thus earning the trojan family (and its associated botnet) its new name, Storm (Also see: How a Botnet Gets its Name).

    7. 2007: MPack publicity popularizes exploit frameworks

    In 2007, publicity around MPack led to heightened adoption of exploit frameworks in general, laying the groundwork for managed Web attacks. The release of free or low-cost SQL injection tools in the fall of 2007 furthered that groundwork.

    8. 2008: Goolag and automated injection attacks complete cloud-based malware-as-a-service

    In 2008, remote discovery tools such as Goolag further cemented cloud-based malware delivery via the Web. These attacks quickly proved profitable and shifted the value proposition from spam and malicious marketing to stolen FTP credentials and intellectual/financial property theft. Cloud-based distribution of malware also increased the sophistication of malware creation kits, thus doubling the volume of malware with exponential year-over-year increases.

    9. 2009: Gumblar incorporates and expands a decade's evolution of malware

    The 2009 Gumblar attacks can be viewed as the culmination of a decade's evolution of criminal/profit-motivated malware. Gumblar creates two sets of botnets: client-side traditional backdoors and a second, never-before-seen botnet comprised of thousands of backdoored websites. Gumblar includes a forced redirect revenue stream for the Gumblar creators, thus providing instant monetization, as well as long-term potential profits via its ability to intercept, tamper with and steal Internet and network communications. Gumblar also includes the ultimate in social engineering: turning perfectly good, reputable websites against their visitors.

    10. 2010: ?

    If the poorly coded and fairly innocuous Loveletter ushered in the beginning of the decade, and the highly sophisticated, multi-pronged Gumblar is ending the decade, one can only wonder, and worry, at what the next ten years may bring (Also see: 10 IT Security Predictions for 2010).

    Mary Landesman is a senior security researcher with ScanSafe, a provider of SaaS Web security products.

    Thursday, February 4, 2010

    Searches for ‘iPad’ lead to malicious sites

    From: http://www.safekids.com/2010/01/30/searches-for-ipad-leads-to-malicous-sites/

    Security companies are warning consumers and Web site operators to be wary of iPad-related search scams.

    “This is just the kind of opportunity fraudsters like to exploit by poisoning search terms,” said Symantec’s Candid Wueest. Wueest also warned about “iPad-related spam and phishing attacks hitting consumers hard over the coming weeks.”

    Don Debolt, CA’s director of threat research, warned about “black hat search optimization” – a scam whereby hackers take advantage of security flaws in blogs and other sites that use the PHP scripting language to embed popular search terms like iPad, tricking search engines into directing people to compromised legitimate sites that may have nothing to do with the subject matter at hand. If people click on the link to a page on an infected site, they are then redirected to a malicious site that can implant malware on their machine or tempt them to install a rogue security product.

    It has nothing to do with the iPad itself. Similar techniques have exploited other popular searches such as the Haitian earthquake and the death of Michael Jackson. Google has a trends page that shows hot topics and hot searches. On Thursday afternoon, the iPad was represented four times on the Top 10 list. “Obama State of the Union” led the list.

    The entire process is automated, said Debolt. “We found that it’s a very systematic and programmatic process right now.” The attackers, he said, are using software to query search engines to find out the popular search topics and then “feeding that information into compromised Web sites so that those compromised sites and the content they put on those sites get indexed by the search engine bots.” To the end user it looks as if those sites have relevant content, but when you click on those pages, you are immediately taken to another site that has the malware.

    Debolt warns people to be careful if a search engine points to a site where “the root domain of the URL doesn’t have any type of affiliation to the topic or is not an information portal you’re familiar with.” He warns site operators, especially those with a content management system that uses PHP, including Joomla, WordPress, and Drupal, to be sure they are using the latest version of their Web software.

    I have a bit of experience with injected code. I operate a number of WordPress blogs, including SafeKids.com, which a few years ago started serving up Google ads for Viagra and other male enhancement products. These were far from appropriate context-sensitive ads for an Internet safety site, and when I took a look at my site’s code, I discovered hundreds of links and terms that had been injected into my site as a result of a security flaw in my WordPress template. I replaced the template and updated the WordPress software, and the problem went away. Now I’m careful to make sure I’m always running the latest version of WordPress.

    As usual, people are cautioned to make sure they are using up-to-date security software and that both their operating system and browser are up to date.

    Wednesday, February 3, 2010

    Chinese Government Website Hacked to Spread Malware

    From: http://www.spamfighter.com/News-13840-Chinese-Government-Website-Hacked-to-Spread-Malware.htm


    Mike Geide, Senior Security Researcher at Zscaler, has found that a Chinese government website was recently infected. The infection is similar to the one used in the Internet assault that reportedly drove Google to say it would stop operating on the Chinese Internet, as reported by USA TODAY on January 26, 2010.

    Geide became aware of the infection on 'latax.gov.cn' when he visited a forum that posted a report stating that users accessing the website could become infected with malware.

    The researcher also states that anyone visiting the government website, which contains tax payment information, risks infection. The malware infiltrates the visitor's system through a newly found vulnerability in Microsoft's web browser, Internet Explorer. A backdoor component is created on the target computer that helps the attacker plant a program capable of activating the computer's webcam and capturing sensitive data without leaving a clue, Geide explains.

    The original malware threat existed on the indexing page of the website. The attack was executed using a popular do-it-yourself crimeware kit called 'Hupigon'.

    The security researchers state that upon installation, 'Hupigon' can present different utilities to the attackers. A rootkit feature in Hupigon makes its detection especially difficult.

    Moreover, Hupigon's menu-driven controls are in Chinese, and the kit is traded on Chinese-language forums. This apparently suggests that Chinese individuals themselves were responsible for infecting www.latax.gov.cn.

    Geide also wrote in a blog posting that this was just one instance of the zero-day vulnerability in IE affecting gov.cn websites; he had observed other similar reports as well, as reported by Zscaler on January 25, 2010.

    According to the researcher, the question that arises is whether these attacks are designed to monitor citizens' activities, or whether hackers have infiltrated the gov.cn sites. Whatever the case may be, users are advised to browse safely, and those using IE 6 should upgrade or switch to another browser, he added.

    Tuesday, February 2, 2010

    A Look at the Google Hack (aka the 'Aurora' Attack)

    From: http://www.bluecoat.com/blog/look-google-hack-aka-aurora-attack

    Background

    In what is easily the biggest malware story since the Conficker outbreak, Google announced on January 12th that they'd been investigating "a highly sophisticated and targeted attack" on their systems. Further, they took the unusual step of openly stating that the attack originated from China, and that they had found evidence that more than twenty other large corporations had also been targeted.

    A couple of days later, this was followed by news that the attack used a vulnerability in Microsoft's Internet Explorer browser. (Late last week, news leaked out that Microsoft had known about this vuln since last August, and that a patch had been developed and was in testing for a scheduled February rollout.) Microsoft rolled out an IE patch for this vuln, plus six or seven others that were pending, on the 21st.

    Because Google has not released an official update on their blog since the original Jan. 12th post, the security community has been trying to piece together more of the details. Here is a sampling of what we know so far:

    The "Command and Control" servers for this attack were traced by Google to Taiwan (which is where their engineers accessed a server that contained the evidence of the attacks on the other companies). This by itself, of course, would not prove a Chinese origin for the attacks, as servers can be hosted anywhere (this is the Internet, after all!) However, the fact that one of the key datasets sought by the attackers was the Gmail accounts of Chinese human rights activists, and a bit of nifty sleuthing based on a unique algorithm in the malware source code, point more directly to China. (BTW, that last link also shows where the "aurora" name came from.)

    Technical Details

    While some have questioned the plausibility that all of the companies were hacked via the same IE6-only exploit, that's the official story so far, so we'll stick to that. Basically, this bug in IE is found in all versions since v6, but the exploit used in the attacks was only functional with IE6 -- it would require modification to work in IE7, and further modification to work in IE8, which contains additional safeguards against this sort of thing.

    At some point, someone investigating the malware apparently posted a sample of the javascript-based exploit trigger to wepawet (an experimental tool that looks for suspicious javascript in web pages). Someone else noticed, and wrote up a formalized version of the exploit. From there, the rest of the Bad Guy universe, who didn't know about the new exploit, have picked it up and added it to their exploit toolkits.

    We can assume that the IE6 exploit packages will soon be joined by versions that target IE7 (and possibly IE8), if they haven't already. But the fact is that there are dozens and dozens of recent (and not-so-recent!) exploits still being deployed by the Bad Guys in their attacks every day. (Even long after patches have been released by the targeted software vendors, the Bad Guys know that there are always plenty of unpatched systems out there.) So, we can confidently predict that future attacks will certainly contain exploits based on "aurora". As so many exploits have before, it will quickly make the transition from "new and scary" to "old hat". Fortunately, the Good Guys also know about it now, which means we're back to a normal status quo: if you've got good defenses, and you're adequately patched, you'll be OK, but if you're not, then you'll probably get infected, just like always.

    On the WebPulse™ front, as details about the attack emerged, we began identifying the sites being used to host the attack malware (and, more importantly, what the traffic looked like). Most of the malicious sites were hosted on subdomains served via Dynamic DNS hosting services. As I look back through the database, it appears that these sites came into use after the exploit was publicly posted. In other words, these represented the second wave of Aurora-based attacks. My hunch is that the original attacks that Google noticed back in December are completely separate from this second wave. (I'm still digging through logs going back a month; I'll post anything interesting I find.)

    Implications

    First of all, whenever security pros hear "targeted attack" they think "spear phishing". Unlike normal phishing, which relies on very widespread spam or social networking attack vectors, spear phishing focuses on a particular group of targets. If an attacker can put together a list of email addresses for a particular corporation (not too hard), and a general idea of the org chart (also not too hard), they can produce a much more believable email than a common spammer can. With the headers forged to look like it's coming from someone within the organization, and its content dealing with some current event relevant to the company, the recipients are skillfully manipulated from the "don't open attachments in emails from people you don't know" mindset to the "it's OK to click on links in emails from people you do know" mindset. If even one of the recipients with a vulnerable system takes the bait, the exploit triggers.

    The spear-phish email contains a link to a malicious Web page, with the exploit contained directly in its javascript (for browser exploits), or else the script downloads an auxiliary file with an exploit that targets a browser plug-in (bogus PDF files being a continuing favorite). In either case, the computer's security is "exploited", and the attacker can then direct the browser to secretly download the actual malware package. Once the malware is installed, the attacker has a foothold in the corporate network, and can begin searching for the desired data.

    In a nutshell, this is the scary part: thirty big corporations had their systems penetrated, and it looks like none of them knew about it until Google raised the alert. (I'm assuming that what Google found on the enemy server was evidence of successful attacks -- actual evidence of stolen data, not just a list of targets being attacked.) The victims' anti-spam defenses failed to catch the bogus emails. Their web filter permitted the link to the site with the exploit, and the connection to download the malware payload. Their antivirus didn't recognize the malware download (as it was brand-new for this attack). Their IDS/IPS (Intrusion Detection/Prevention Systems) didn't spot it. And their DLP (Data Loss Prevention) systems didn't catch the data as it left.

    Or maybe not. It might not be that bad. Lacking many details from Google and Adobe (the only two companies who have said much about it), and not knowing anything about the other victims, we're only guessing about the scope and severity. But it sure looks like the Bad Guys defeated all the Good Guy technology and won this round, and it was up to an alert sysadmin at Google to notice something amiss in the network logs and start investigating....

    As you might expect for a story of this size, we'll be doing further blog posts in the future, looking at specific aspects of this attack and the implications for corporate web security.

    Further Reading

    A very interesting commentary thread emerged that focused on one of the apparent targets within Google: a system designed to collect user account information (including email headers but not email content, for example) so that it could be more easily handed to government agencies with search warrants. See commentary here and here on the security ramifications of building systems with such "official security vulnerability" access points, with the original source for the story here.

    And, for an excellent four-page overview of the whole incident and side stories, with plenty of links, try the zdnet blog.

    Monday, February 1, 2010

    Survey Finds Growing Fear of Cyberattacks

    From: http://www.nytimes.com/2010/02/02/us/29cyber.html

    A survey of 600 computing and computer-security executives in 14 countries suggests that attacks on the Internet pose a growing threat to the energy and communication systems that underlie modern society.

    The findings, issued Thursday by the Center for Strategic and International Studies and the computer-security company McAfee, echoed alarms raised this month by Google after it experienced a wave of cyberattacks.

    “One of the striking things we determined is that half of the respondents believe they have already been attacked by sophisticated government intruders,” said the study’s director, Stewart A. Baker. “It tells us that this is a serious problem right now.”

    More than half of the executives called their own nation’s laws inadequate for deterring cyberattacks. Half identified the United States as one of the three most vulnerable countries; the others were China and Russia.

    Moreover, the United States was identified most frequently as a potential source of cyberattacks.

    “When they were asked which country ‘you worry is of greatest concern in the context of network attacks against your country/sector,’ 36 percent named the United States and 33 percent China — more than any other country on a list of six,” the report said.

    China’s security measures also came in for praise from the executives.

    “It was striking how much of an outlier China is on a number of measures,” said Mr. Baker, a Washington lawyer who formerly served as assistant secretary for policy at the Department of Homeland Security and as general counsel for the National Security Agency. “They have confidence in their government, and they are adopting security measures at a higher rate than other countries.”

    The report focuses on “critical infrastructure” — essential networks and services that include the financial system, transmission lines for gas and electricity, water supply, and voice- and data-communication networks. At the heart of these systems are networks known as Scada systems, which are the basis for manufacturing, power generation, refining and other basic operations in advanced economies. (The acronym stands for supervisory control and data acquisition.)

    The increasing use of Internet-based networks “creates unique and troubling vulnerabilities,” the report says. In the past, the data used by such industrial systems was largely carried on proprietary networks that were often better insulated from the outside world.

    The advantage of the Internet lies largely in the lower cost of developing systems because of the low cost of commodity products. But the report’s authors stopped short of calling for a complete separation between those systems and the open Internet.

    “Remote access to control systems poses a huge danger,” said Phyllis Schneck, McAfee’s vice president for threat intelligence. “We must either protect it appropriately or move it to more private networks and not use the open Internet.”

    The report found considerable pessimism among the executives, whose responses were anonymous.

    “Remarkably, two-fifths of these I.T. executives expected a major cybersecurity incident (one causing an outage of ‘at least 24 hours, loss of life or ... failure of a company’) in their sector within the next year,” the report said. “All but 20 percent expected such an incident within five years. This pessimism was particularly marked in the countries already experiencing the highest levels of serious attacks.”