Contrary to some of the more extravagant vendor claims at the moment, Web 2.0 is not a revolution of the internet, but an evolution in the way users and computers interact with each other across the net. Yuval Ben-Itzhak of Finjan investigates the difficulties in securing the web.

Web 2.0 is essentially the non-interactive World Wide Web of four or five years ago with the addition of community input. Members of the community upload their favourite content - e.g. music, video or text - and share it with each other on the web.

And now the bad news: just as web users can upload their content, so can the hackers - uploading malicious content to various sites and so infecting the community at large.

The net result is that all manner of issues start to rear their ugly heads with Web 2.0 - internal and external security; legal liability (direct, indirect and consequential); and regulatory and compliance issues.

And that's just for starters.

One of the major worries for IT managers seeking to protect IT resources connected to, and based on, the internet has been the development of rich internet applications in recent years.

These applications - which include Adobe Flash and AJAX (Asynchronous JavaScript and XML) - allow a web page (or an element within it) to request an update to its content, or part of its content, and to alter that section of the page in the user's browser, all without having to refresh the whole page.
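By way of illustration, the sketch below shows the basic AJAX pattern in TypeScript. The /api/headlines endpoint and the "news" element are hypothetical, invented purely for this example.

```typescript
// Minimal sketch of the AJAX pattern described above.
// The endpoint "/api/headlines" and the element id "news"
// are hypothetical, for illustration only.
function refreshHeadlines(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/headlines");
  xhr.onload = () => {
    // Only this one element is updated; the rest of the
    // page is never reloaded from the server.
    const panel = document.getElementById("news");
    if (panel !== null) {
      panel.textContent = xhr.responseText;
    }
  };
  xhr.send();
}

// Poll for fresh content every 30 seconds without a page refresh.
setInterval(refreshHeadlines, 30_000);
```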

The problem facing the modern IT manager is that businesses do not block users from visiting web sites built on these rich applications, as they form the mainstay of Web 2.0 and, as such, offer users some of the best content available.

For all these advantages, rich applications can load additional content after the main page has been displayed in the browser. This poses a problem: URL filters and anti-virus applications look for signatures, which are impossible to spot when the content is effectively being loaded in parts.
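The sketch below illustrates why, under the simplifying assumption of a scanner that matches a single known-bad string against each response in isolation. The signature and the content fragments are invented for the example; no real exploit is shown.

```typescript
// Illustrative sketch (not a real exploit): why per-response
// signature matching fails when content arrives in parts.
const SIGNATURE = "evil_payload_marker"; // invented signature

function scanResponse(body: string): boolean {
  // A conventional filter inspects each HTTP response in isolation.
  return body.includes(SIGNATURE);
}

// The page fetches two innocuous-looking fragments...
const partA = "evil_payl";   // scanResponse(partA) === false
const partB = "oad_marker";  // scanResponse(partB) === false

// ...and only assembles the full payload inside the browser,
// after the scanner has already seen (and passed) both parts.
const assembled = partA + partB;
console.log(scanResponse(partA));     // false
console.log(scanResponse(partB));     // false
console.log(scanResponse(assembled)); // true - but no scanner ever sees this
```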

The situation is made worse by the fact that the majority of Web 2.0 sites are more prone to attack, since they have more interactions with the browser and require relatively complex and extensible code to run on users' machines.

Protecting the IT resource and reducing legal liabilities

The biggest problem facing IT managers when seeking to counter Web 2.0 risks is that web sites such as Wikipedia, MySpace and Flickr are perceived as 'trusted' by many URL filtering and categorisation security products.

When this misplaced trust is coupled with the concealment of malicious code, tackling Web 2.0 threats requires a multi-vector approach, typically involving both proactive and reactive IT security technologies at the user organisation, as well as at the company hosting the web pages in question.

Achieving this objective requires the installation of an appliance - typically at the edge of the IT resource - which performs real-time code inspection of traffic flowing across the network boundaries.

This approach, coupled with a rapid-response patching system for zero-hour protection against software vulnerabilities and, of course, the usual anti-virus and anti-spyware technologies, can stop spyware and similar malware-laden attacks from crossing the enterprise gateway.

Other options that should be considered in a Web 2.0 security appliance include the use of multi-vendor anti-virus engines and URL filtering technology, as well as SSL inspection for scanning encrypted content and enforcing digital certificates.

The appliance should also be capable of analysing each web request in real time at the gateway between the browser and the web servers. This allows even the most complex web pages from, for example, MySpace.com to be scanned for malware in the same way as simpler, static web pages.
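A highly simplified sketch of this kind of real-time inspection appears below. The heuristic rules are invented purely for illustration; a production appliance would analyse the actual behaviour of the code rather than a handful of textual patterns.

```typescript
// Highly simplified sketch of gateway-side, real-time content
// inspection. The rules below are invented for illustration.
interface InspectionResult {
  allowed: boolean;
  reason?: string;
}

const SUSPICIOUS_PATTERNS: { name: string; pattern: RegExp }[] = [
  // Script that reassembles strings before evaluating them is a
  // classic sign of content being loaded "in parts".
  { name: "dynamic-eval", pattern: /eval\s*\(/ },
  { name: "obfuscated-unescape", pattern: /unescape\s*\(\s*["']%u/ },
];

function inspectAtGateway(responseBody: string): InspectionResult {
  for (const rule of SUSPICIOUS_PATTERNS) {
    if (rule.pattern.test(responseBody)) {
      return { allowed: false, reason: rule.name };
    }
  }
  return { allowed: true };
}

// Every response is inspected before it reaches the browser,
// regardless of how "trusted" the source site is.
console.log(inspectAtGateway("<script>eval(payload)</script>"));
```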

Web 2.0 security futures

Research conducted by NetBenefit in the UK in May 2007 found that 60 per cent of users are actively using Web 2.0 technologies in the form of blogs, AJAX-enabled web sites and mash-ups.

As Web 2.0 technology is progressively rolled out to both business and consumer users of the internet, there can be little doubt that the associated IT security risks will escalate.

And, as previously unknown threats and exploits arrive on the web, it is Finjan's belief that the use of multiple IT security solutions from several vendors - preferably operating under the control of a single console - must become the standard approach for any organisation seeking to protect its internet-connected assets.

This approach should be mirrored by a similarly responsible attitude towards site security among web site operators of all sizes, who should scan both incoming and outgoing code to stop any malware in its tracks.

This matters because Finjan researchers have discovered that AJAX can be used to query back-end web services automatically, giving hackers an opening to coordinate invisible attacks using AJAX queries, since the code is never revealed on the site itself.

Perhaps worse, the code can be SSL-encrypted whilst in transit, making it invisible to conventional URL filtering or digital signature technology.
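The sketch below illustrates the general shape of such a hidden query, assuming a hypothetical back-end endpoint. Over HTTPS, a gateway filter sees only the destination domain, not the request or response content.

```typescript
// Illustrative sketch of the technique described above: script
// already running in the page queries a back-end service directly,
// so the request never appears as a link or form in the page source.
// The endpoint is a hypothetical placeholder.
async function invisibleQuery(): Promise<void> {
  // Over HTTPS, the request and response are encrypted in transit,
  // so a URL filter at the gateway sees only the domain.
  const response = await fetch("https://example.com/backend/service", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ action: "lookup" }),
  });
  const data = await response.json();
  // The returned content is used straight from memory; it is never
  // written into the page where a scanner could inspect it.
  console.log(data);
}
```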

The final building block in this multi-vector security strategy must come from the internet service providers we all use to access the internet.

All ISPs should install anti-malware scanning technology as a standard feature of their service.

Only when this happens will the internet become a safer place to conduct business; until then, each business should deploy such scanning at its own perimeter.

Whilst Web 2.0 and, in particular, AJAX technologies have enhanced the user experience and added important business functionality, they have also introduced opportunities for hackers to invisibly inject and propagate their malicious code.

Conventional reactive, signature-based IT security technology was never designed to detect these types of dynamic web malware. As a result, multi-vector and multi-vendor protection will become ever more necessary.

The dangers of Web 2.0-enabled web sites revealed

Back in December 2006, hackers compromised the MySpace social networking site and infected significant numbers of user profiles with a worm.

The malware exploited a known vulnerability to replace the legitimate links on the user profiles with URLs leading to a phishing web site, where - unsurprisingly - users were asked to enter their IDs and passwords.

The sting in the tail was that, as well as harvesting users' IDs and passwords for the hackers, the worm also embedded an infected video file into victims' user profiles, so ensuring propagation to other hapless users of the MySpace service.

This infection-by-proxy issue could have been prevented had MySpace installed security software that monitored for unusual activity on its systems and locked down any potential malware at the earliest possible stage.

The use of a Web 2.0 platform for malicious purposes was also identified by Finjan's Malicious Code Research Centre in April 2007, when a site offering art directory services was infected by hidden malware.

The AJAX-driven malware, which was invisible to conventional anti-virus technology, caused users to auto-download a Trojan from a remote server without any user interaction. Simply visiting the page in question was enough to infect the user's machine.

Another Web 2.0 malware example involved an online banner advertisement which ran on MySpace.com that exploited a Windows vulnerability to infect more than a million users with spyware.

Internet Explorer users who visited a web page containing this ad and whose browser was not equipped with the latest WMF patch found their machines silently downloading an adware-driven Trojan tracker.

Although the WMF vulnerability had been patched in January 2006, by targeting a high-traffic web site the hackers were still able to achieve mass infections later that year.