Introduction
Sûnnet Beskerming Pty. Ltd. occasionally produces small reports for free (gratis) distribution. The free content may cover any area in which Sûnnet Beskerming operates. Examples include generic security advice, specific security warnings, development practices, and application tuning. Reuse of information from this site is subject only to the conditions in the following paragraph.
Use and reuse of information from this site requires written acknowledgement of the source for printed materials, and a hyperlink to the parent Sûnnet Beskerming page for online reproduction. Content from this page cannot be reused in a commercial context without negotiating an appropriate licence with the site owner. Personal and educational use is granted without additional restriction, up to an amount in accordance with the principle of "fair use". Site users are encouraged to exercise fair judgement as to what amounts to "fair use". Please contact us if you reuse our content, so that we can provide more specific advice where necessary.
Sûnnet Beskerming do not normally offer services and products direct to the consumer, with this weekly column being the primary exception. One of the primary difficulties with a weekly column is ensuring that the content remains fresh and relevant, even when it may be more than a week out of date at time of publishing. To remedy this, and to provide more timely information for people who want up-to-the-minute news, Sûnnet Beskerming is announcing the establishment of a mailing list. The list will provide up-to-the-minute news on emerging threats, advice on good security practices, analysis and explanation of technical news items which may have an impact on your future IT purchases, and collation and distillation of multiple news sources into a brief, accurate, unbiased synopsis of technology trends, with a focus on security. Sûnnet Beskerming do not restrict the focus of their services to a single operating system or hardware platform, so you receive the same level of service even if you do not run the leading Operating Systems.
As little as a few hours' warning can be enough to protect your systems against rapidly emerging threats. Some of the most prolific worms and viruses in existence can infect all vulnerable systems within a matter of hours, so every second counts. This is where Sûnnet Beskerming's services help.
As a recent example, you would have been informed of the network compromise which exposed up to 40 million credit card details a full 12 hours before it was reported on the major Information Technology news sites, and more than four days before it reached the mainstream media.
Sometimes we are even faster than Google, delivering timely, accurate information before any related content appears in Google's search results.
Not many people can afford to dedicate themselves full time to searching out and identifying this information, and so tend to find out only once something bad has already happened to their systems. Let Sûnnet Beskerming use their resources to bring you this information before you find it out the hard way.
Sûnnet Beskerming are offering a free trial membership period for consumer subscribers to the mailing list (businesses have their own, similar list, with added services). For subscription details, or more information, please send an email to info@skiifwrald.com.
Fun With Numbers - 25 July 2005
The technologies used to deliver Internet content, such as HTML, are subject to interpretation by the browser makers as they strive to produce browsers that render content in a consistent manner. The different browsers rely upon different rendering engines, which means the presentation of a website can differ from browser to browser - a website may not render correctly in Firefox, for example, but will render correctly in Safari. Bodies interested in establishing consistent rendering compliance have little sway with software developers, but their tests provide interesting insights into browser differences. For example, CSS rendering can be checked against the Acid and Acid2 tests, which establish how well a browser complies with the relevant W3C standards, and webpages can be validated for accurate HTML, XHTML, and CSS to help determine how they should be rendered.
Several months ago, a toolset was released to the security community which was designed to feed improper HTML content to a browser and observe whether it would crash or hang (usually the first step towards an exploitable issue). Bugs identified as a result have been fixed in a number of browsers, and some minor exploits were released which took advantage of the issues discovered. The same people responsible for this tool have now turned their attention to image handling by browsers. Unlike the corrupted HTML tests, pages with intentionally corrupted images will still render (with images turned off), but will cause various browsers to crash or hang if they attempt to render the images. The release of this tool caused some consternation amongst security researchers, as it highlighted serious vulnerabilities in popular Internet browsers, was far more trivial to implement than the corrupted HTML bugs, and was next to impossible to detect without actually attempting to render the image.
For the end user, it is important to apply caution to your online actions, and to be aware that your browser could be crashed or compromised without you being able to stop it. There is little more you can do until the browser vendors release patches to fix the flaws, unless you switch to a less vulnerable browser, or a less vulnerable operating system. In reality, the rate of exploitation will most likely remain low, but the chance exists that a major infection / exploitation vector has just been opened. Fears like this have started to force people offline, as they begin to feel real risk from the numerous vulnerabilities in existence.
Following the debacle at CardSystems, in which more than 40 million credit card details were compromised, Visa has cut its business with the company. According to The Register, Visa USA has given 11 banks until the end of October to switch their payment processing away from CardSystems. If other credit card providers follow suit, the future of CardSystems as a company could be threatened.
In privacy breach news, the University of Southern California recently admitted to a breach of its network which could have exposed the application data of more than 250,000 people. The application data included Social Security Numbers, a staple of identity theft. The University was unable to verify how many records had been compromised because it had no ability to track which records had been accessed. It was believed that there was no mass theft of records, because only random records were available at any time (although automation can overcome this limitation). The University was also unable to verify exactly whose records had been compromised.
As a result of a breach reported earlier this year in this column, ChoicePoint has announced that it suffered a $6 million USD cost in the second financial quarter, along with a $5.4 million USD cost in the first quarter. Across 145,000 individual records, that works out to around $78 USD per record in direct costs for ChoicePoint. Analysts reporting on the case have noted that breaches of personal information are becoming more costly to the companies at fault due to added reporting requirements, such as those laid down by SB 1386 and similar laws. The result is that breaches which had been ongoing for a long period are now being publicly announced.
A low-threat worm currently doing the rounds of the AIM messaging network tries to trick users into believing that it is related to the Apple iTunes product. Arriving as a message with the title 'This picture never gets old', the message directs victims to download a file named iTunes.exe which, when executed, downloads a number of spyware applications and joins the affected machine to a network of zombie machines also infected with the worm.
Results of a recently published survey suggest that a third of workers with email access abuse the privilege to send inappropriate emails, even when this is expressly prohibited by corporate policy. Despite this, only ten percent knew of a case where someone had been terminated for abuse of corporate email. Although the survey focussed only on UK workers, it can be assumed that similar levels of inappropriate usage exist in other countries and organisations.
Another recent UK survey is the latest of a number to suggest that identity theft concerns are forcing people away from online commerce. Around 15% of respondents have stopped making telephone-based purchases, with a similar number having stopped online purchases and banking. The younger age groups were more likely to have ceased telephone-based purchasing, whilst the oldest age group was more likely to have ceased Internet-based commerce and banking. The survey operators indicated that these figures represented people who have completely ceased transactions through those mediums, not those who had merely cut back. Even with these numbers, online purchasing is increasing in overall value.
Sticking with surveys dealing with online activities, a US survey suggests that around $200 billion USD is lost in productivity each year due to employees web surfing at work. The survey has been debunked by a number of groups, who counter that there is no empirical data to back it up. There are a couple of major red flags which undermine the validity of the results. The first is that the survey was sponsored and published by the same company that sells solutions to cut down on web surfing at inappropriate times. The second is that the actual figures from the survey supported only half of the headline number, with the balance being made up from the impressions of management.
A Ukrainian man was arrested earlier this month for his role in setting up a trading site for stolen credit card numbers at CarderPlanet.com. At its peak, the site had several thousand active members and was trading millions of account details. Despite the site having been closed since last year, law enforcement agencies have continued to track down the site originators, with one of the site's high ranking members already serving time in an English prison, having pleaded guilty to fraud and money laundering. The organisation behind the site was largely Russian and Eastern European, and law enforcement agencies suggest that there may be more arrests to come in Ukraine with respect to the site.
In a case of excellent timing, ZDNet is reporting that AusCERT may be facing an existence crisis with the suggested creation of a GovCERT body as part of the Australian Government's response plan for 'cyberterrorism' attacks. The Full Disclosure mailing list recently had an active discussion regarding the usefulness of multiple agencies in the US, which has CERT, CIAC, CVE, ICAT, and US-CERT all operating in the same sphere. The introduction of a GovCERT alongside AusCERT should not mean that one has to cease to exist; the US experience indicates that multiple agencies can each maintain their own focus, despite overlapping areas of responsibility, and operate relatively well with each other. AusCERT took a defensive stance in the ZDNet article, essentially claiming that because they were there first, GovCERT isn't required. Comments by industry analysts further down the article suggested that there is room for both organisations, although the current mindshare is with AusCERT.
The popular open source alternative Internet browser, Firefox, was subject to a number of minor vulnerabilities in the latest release (1.0.5), affecting a number of extensions and localisations. One of the most popular extensions, Greasemonkey, which allows client-side (on your computer) modification of web pages through the use of scripts, has recently announced a major vulnerability affecting all versions. Through its scripting capabilities, Greasemonkey can be used to automatically fill out forms, remove advertising, change links to pass through a proxy, or any number of other uses. Unfortunately, this ease of use is matched by an ability to expose the contents of your local hard drive to any site that you view through Greasemonkey. The exact nature of the flaw allows this to be done silently, irrespective of Operating System (yes, it affects Linux and OS X systems as well as Windows), and opens the window for much worse exploitation by those with malicious intent. To overcome the issue, users should either uninstall the Greasemonkey extension, or move to version 0.3.5, which disables the functionality that is exploited in this case.
Information Wants To Be Free - 18 July 2005
A recently reported lawsuit highlights just how strongly information trends towards being free over time. The Internet Archive project, and a law firm that was using the Wayback Machine for research purposes, are being sued by HealthCare Advocates. The complaint is that the Internet Archive was illegally storing unauthorised copies of web pages in its archives.
While any number of Internet search engines store cached copies of web pages, the issue in this case is that the Internet Archive did not respect the robots.txt file present on the HealthCare Advocates site. A robots.txt file is a short text file placed in the top level directory of a website, with instructions for web spiders and Internet search bots about which areas of the site can be indexed and which should not be looked at. Some spiders do not respect this voluntary instruction file, and investigating the robots.txt file is an elementary part of web based hacking, as it provides the attacker with an easy indication of possibly sensitive areas of the site. Apparently the Internet Archive was not respecting the robots.txt file in place on the HealthCare Advocates site, although the evidence suggests that the robots.txt file blocking the Internet Archive was only placed on the site after the lawsuit began, and after the Internet Archive had already archived the pages.
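As an illustration, a minimal robots.txt might look like the following (the directory names are invented for the example; 'ia_archiver' is the crawler token historically honoured by the Internet Archive):

    User-agent: *
    Disallow: /admin/
    Disallow: /applications/

    # Block the Internet Archive's crawler from the entire site
    User-agent: ia_archiver
    Disallow: /

Compliance is entirely voluntary - the file is a request, not an enforcement mechanism, which is the heart of the dispute.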
Even with the robots.txt file and other systems in place, it is important to recognise that information will always move towards being free. Internet search engines will cache information long after it has been removed from the active Internet, and specialist sites will cache or locally copy material that they find interesting. Placing information on a website is a decision to distribute it, allowing infinite copies to be made. In essence, it provides tacit approval for localised copying, even if instructions are given not to copy, such as through no-cache headers in HTTP. No matter how well information is supposedly protected, nothing can effectively be done about the 'analogue gap' (or 'analogue hole'), where the information is rendered or otherwise presented for human consumption. If you can see it, it can be copied. Likewise, the best digital rights management solutions will always fail, due to this gap:
- Protected audio still needs to be decoded into a continuous stream for listening, and can then be recorded directly from the stream.
- Protected documents still need to be decoded for visual display, and can then be manually copied, or screen scraped to provide duplicates.
- Protected or streamed video can be captured following decoding by the video codec, or captured from a screen recorder.
A part of the problem comes from people who do not understand how the Internet actually works. There are people who still think that once they download something from the net it will no longer be available to anybody else, that web pages exist only while somebody is looking at them, and that the documents disappear when the browser is closed. In reality, a web page sits on its server permanently, and a fresh copy is sent out each time a browser requests it. The difficulty lies in applying existing knowledge of data management and information replication to computerised and networked operations; almost all of the analogies used simply do not work well. Without those weak analogies, however, people with lower levels of technical understanding will never grasp the technology, and this includes lawmakers, law enforcement, and the general populace. This leads to legal decisions and lawmaking which appear counter-intuitive to technically minded people, and to misunderstanding of how such laws apply - such as the recent US Supreme Court ruling on P2P applications, which is still being reported as having made P2P applications illegal (which is incorrect).
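As an illustration of the request / response model described above, the following minimal Python sketch fetches a page the same way a browser does - each viewing triggers a request for a fresh copy, and the server's original is untouched by whatever the client later does with that copy (example.com is an address reserved for documentation):

    import urllib.request

    # Each page view is a separate request; the server sends out a fresh
    # copy of the document every time one arrives, and keeps its own
    # original regardless of what any client does with its copy.
    with urllib.request.urlopen("http://example.com/") as response:
        page = response.read()

    print(len(page), "bytes received - this copy now exists independently")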
Given that information wants to be free, the idea of a national ID card system in Australia would, at one stroke, create the most desirable database for identity theft. With such an enticing target for criminals, it will only be a matter of time before it is compromised and the entire database is in criminal hands. It would be the greatest gift to identity theft. There will be naysayers who deny that this could happen, but that stance ignores reality. Last week's column provided an example where almost any personal information on Russian residents was available on the street, at public markets. And the hacker arrested for breaking in to NASA and US Military systems claimed just last week that the systems were woefully protected, with one particular group of machines having been installed with a blank administrator's password, allowing him to gain full control without any real effort.
Although obscured in the PDF version of his indictment, the addresses of the attacked sites are fully visible when the text is copied across to another editor. This is merely the latest in a string of faux pas by companies and Government agencies that have failed to adequately obscure information before release. Another highly public example was the US report on the shooting of the Italian hostage following her rescue in Iraq: the full report, including the blacked-out sensitive text, was readable after copying it across to a text document.
Sticking with Australian news, an Australian man has been found guilty of linking to sites which held illegal content (illegally shared music files). In addition, the ISP providing space and network connectivity for the site has been found to be a guilty party as well. The case in question was mentioned a number of weeks ago in this column, and the ruling has now been handed down. In this case, the guilty party was knowingly linking to illegal content, and the ruling is in line with the approach the Dutch legal system appears to have taken: knowingly linking to illegal content constitutes an offence, but unknowingly linking to it does not.
This ruling has some fairly far reaching effects, when thought through. Search engines such as Google, Yahoo, MSN Search, Altavista, Sensis, and others could be held liable for the sites in their indexes, as they must be aware that some of what they link to is illegal content. Already some foreign service providers are blocking Australian IP addresses from their services, though this is not widespread at this stage. This sort of decision is not actually without precedent. Major search engines already have to prevent certain content from being presented to certain national IP blocks: content relating to Nazis cannot be presented to German web users, for example, and the Great Firewall of China is designed to block content deemed unsuitable for Chinese web users, such as news articles critical of Chinese Government policy and actions. The adage that 'The Internet treats censorship as damage, and routes around it' holds true, and this case will only result in a minor hiccup in the flow of illegal content. The recent multi-nation raids by law enforcement that shut down a number of warez servers (8 in total) will only cause a minor disruption of supply to the multitude of warez downloaders, as they adjust to alternate sources and continue unabated. In the same way, shutting down sites hosting BitTorrent files which point to illegal content will not work, as users move to sites located out of reach of the legal agencies chasing them. A more practical example is spam email: even with the US CAN-SPAM anti-spam legislation, the USA is still the source of more than half of global spam content, and relays in China are a popular choice for people who choose to operate out of reach of local law enforcement.
Related to the idea that information wants to be free is the notion of banning information. With the freedom that the Internet, and associated file sharing networks, give to information, it becomes essentially impossible to filter all network traffic to proactively block content that has been banned or is illegal. When national classification bodies, such as the Australian Office of Film and Literature Classification, ban a movie or book, they automatically create demand for the banned content, as people seek to discover for themselves exactly what got it banned. Likewise, attempts to ban or restrict access to information will eventually see it set free by people who want to handle the information and do not wish to see it suppressed. This can go too far, descending into paranoia and conspiracy theory. A cursory glance at major recent global events, such as the September 11 attacks and the invasion of Iraq, turns up both people who are genuinely endeavouring to free information, and those who have gone so far into conspiracy as to claim that September 11 was co-ordinated by the US Government (although some of their other claims do appear to warrant further investigation). Yet without people endeavouring to uncover information that others have tried to hide, the Iraq dossier would have gone unquestioned, the Downing Street memos would never have surfaced, and, going back in history, the Watergate affair would never have been made public.
The ruling in the court case mentioned above has created an ironic situation: the published court documents have done more to draw attention to the illegal content, and advertised it more widely, than any of the efforts of the convicted parties. Some observers have suggested that the court be held to its own standards and charged with the same offences. Others have drawn parallels to the "Index Librorum Prohibitorum", the Roman Catholic Church's list of banned books, which apparently is itself banned / suppressed, and which by its very existence has created demand for the titles it contains.
In more positive news from the week, a 27 year old Chinese student has been arrested in Japan for hacking into a Tokyo travel agency's website and stealing the personal data of about 90,000 customers. The breach happened back in March and was part of a wider spree which involved hacking into 14 business sites and stealing half a million data entries. Apparently some of the data was sold on to unidentified parties, but it does not appear that any of the information has been used inappropriately since its theft. The attacks were made possible through SQL injection, a common form of Internet based attack against websites and web applications.
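For readers unfamiliar with the technique, the following Python sketch shows the essence of SQL injection, using an in-memory SQLite database and invented table and column names: user input is pasted directly into the query text, letting an attacker rewrite the query itself, while the parameterised form treats the same input purely as data:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, card TEXT)")
    conn.execute("INSERT INTO customers VALUES ('alice', '4111-0000-0000-0000')")

    user_input = "x' OR '1'='1"   # attacker-supplied "name"

    # Vulnerable: the input becomes part of the SQL itself, so the OR
    # clause matches every row and dumps the entire table.
    query = "SELECT * FROM customers WHERE name = '%s'" % user_input
    print(conn.execute(query).fetchall())

    # Safe: a parameterised query keeps the input out of the SQL text.
    print(conn.execute(
        "SELECT * FROM customers WHERE name = ?", (user_input,)
    ).fetchall())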
At the end of last week, the Spread Firefox website, which is used to increase awareness of the alternative web browser, was subject to an attack via a published vulnerability in the Drupal Content Management System. Spread Firefox does not believe that any personally identifying information was compromised during the attack, and believes that the purpose of the attack was to use the system for spreading spam messages.
In some fun news for digital rights supporters (not digital rights management supporters), an executive who believes the implementation of DRM solutions for his company's products is essential has been caught violating the DRM of another company, in order to fix problems their DRM caused with his ability to use their software. Although the position he argued from was that DRM which gets in the way is not good DRM (so his stance on DRM hasn't really changed), the fact that he took steps to violate existing DRM, and publicised it, means he appears to be wearing quite a bit of egg on his face. If he had stopped at saying that poor DRM solutions are stumbling blocks to the usefulness of a product, nothing would have been out of line. The problem came when he took the next step and actually cracked the DRM implementation. The executive decided that HE was the one to judge that that particular implementation of DRM was not 'good', and that because it didn't meet HIS requirements it was okay to crack it for HIS needs. In doing so he placed his personal beliefs and convictions above what the law has set down, which puts him on the same level as every other 'pirate' who sidesteps DRM to get what they are after (such as music and movie downloaders).
More people are picking up on the adjustment to Microsoft's Anti-Spyware product with respect to the detection of Claria spyware. The default recommendation has now changed to Ignore, rather than Remove, which is fuelling rumours amongst various groups that Microsoft is losing the good faith that the product had established (prior to this it was one of the better spyware removal tools).
The CardSystems breach of 40 million credit cards, reported on recently, has moved into the news again, with a number of different groups claiming that they were the ones responsible for the fraud being detected and announced. These include CardSystems, who claim that they delayed the announcement at the FBI's request (the FBI deny this), MasterCard, who made the first major press release, and various Australian credit card providers, who claim that they flagged the breach back at the start of the year. The finger pointing and blame shifting is only going to get worse, as a couple of retailers are launching a class action suit against the major credit card providers and processors in an attempt to get them to accept liability for their actions and for their provision of cards and services.
Keeping track of IT assets, and maintaining accountability for them, is an ongoing concern for a wide range of organisations and companies. The United Kingdom Ministry of Defence, in particular, has had a number of high profile losses in the past, and figures recently released indicate that the various branches of the UK government have lost 150 computers so far in 2005. The Home Office is the greatest culprit, with 95 machines lost, while the MoD has lost 23 machines so far. Proper data management policies will mitigate the exposure of critical information through system losses, and companies should be able to identify what information is being stored on their systems. Unfortunately, history has shown that most companies are not aware of the information stored on their various systems, and are unknowingly leaving that information open to exposure (much like the security of wireless access points).
An article on The Register discusses the significant issues caused by domain hijacking. A recent significant hijacking involved anonymous webmail provider hushmail.com, while the sex.com hijacking is a famous case which took a lot of resources and time to resolve. The concern with domain hijacking is that seemingly legitimate domain transfer requests are sent to registrars, seeking to move domains to new addresses. If successful, this directs all website traffic to the new address, where the hijacker could be serving up malware, capturing visitor information, and denying service to the legitimate site. Domain owners should ensure that domain locking is applied to their registration, and arrange for their registrar to contact them via telephone any time a change request is submitted. These efforts will not stop upstream transfers, but will stop most hijackings. Even if a domain has been hijacked, the legitimate site will still be running at the original IP address, and can still be reached directly at, for example, http://192.0.2.1/ (an address from the range reserved for documentation examples).
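The reason the original address keeps working is that a hijacking only redirects name lookups; the web server itself is untouched. A short Python sketch of the precaution of recording the legitimate address ahead of time (the hostname is a documentation placeholder):

    import socket

    # Record the legitimate numeric address while the domain still
    # resolves correctly; if the domain is later hijacked, the real
    # site can still be reached directly at this address.
    address = socket.gethostbyname("www.example.com")
    print("Bookmark for emergencies: http://%s/" % address)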
The Clock is Ticking - 11 July 2005
The clock is ticking on the time to mass exploitation for a couple of vulnerabilities released last week, in particular a Microsoft Internet Explorer vulnerability for which no patch has been released, and for which one may never be.
Various security companies have been reporting on the risks posed by a newly reported (late June) vulnerability in Microsoft Internet Explorer versions 5.01 and above. The main problem exists in a COM object, javaprxy.dll, which could allow complete remote system compromise just by viewing a web page. Microsoft has since come out with an advisory, and it is also possible to disable javaprxy.dll usage within Internet Explorer by editing the Registry. This last action should only be carried out by people who are comfortable with editing the registry, as it is possible that it could destabilise systems.
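For those who are comfortable, the usual mechanism is setting the 'kill bit' for the object's class identifier, which stops Internet Explorer from instantiating it. A sketch of the registry entry follows; the CLSID shown is the one commonly cited for javaprxy.dll, but it should be verified against Microsoft's advisory before use, and the standard warnings about registry editing apply:

    Windows Registry Editor Version 5.00

    ; Kill bit (0x400): prevents Internet Explorer from instantiating
    ; the object. Verify the CLSID against the vendor advisory first.
    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility\{03D9F3F2-B0E3-11D2-B081-006008039BF0}]
    "Compatibility Flags"=dword:00000400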
The vulnerability announced is not limited to javaprxy.dll, but extends to a number of COM objects. With Internet Explorer, a website is able to access software and data stored locally through the use of ActiveX and COM objects; the ability to view and edit Excel spreadsheets and Word documents inside Internet Explorer is provided through such mechanisms. ActiveX has long been regarded as a significant security risk, and the ability to run ActiveX controls unchecked in the web browser has led to quite a number of exploits being made available for Windows based systems. According to SEC Consult, the company that claimed responsibility for discovering the vulnerability, the problem is a flaw in the way that Internet Explorer handles the instantiation of COM objects. SEC Consult were able to find more than 20 COM objects that caused either Internet Explorer crashes or memory faults; javaprxy.dll was merely the first to be successfully exploited in the example SEC Consult provided.
With the javaprxy.dll and PHP XML vulnerabilities announced recently, the ISC has been alerting readers that they believe a significant wave of attacks will flood the Internet in the near future. The ISC admits to feeling like they are crying wolf: the more publicity the vulnerabilities get, the more action administrators will take to remedy them, and the less likely it becomes that the attacks will take place as predicted. There has been no indication of such a wave yet. Having said that, it did take two months for Sasser and Blaster to make their appearances after the underlying vulnerabilities were announced, and weeks after vendor patches were available.
The primary concern with the Internet Explorer vulnerability is that the discovery was only made public two weeks ago, and Microsoft has yet to issue a patch, either for javaprxy.dll or for the COM object handling that is the root cause. Because it was announced that more than 20 of the default objects on a Windows XP installation are vulnerable, and the mechanism of the vulnerability was disclosed, a wide range of attacks targeting these issues could arise in the near future.
In addition to the javaprxy.dll vulnerability, the PHP vulnerability reported on last week already has exploits circulating in the wild. The failure in the PHP case was an inability to process input correctly, which allowed potentially malicious content to slip through and be executed by the PHP parser.
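The underlying pattern - attacker-supplied text reaching the language's own evaluator - is common to many scripting-language flaws. As a general illustration (in Python rather than PHP, and not the actual library code), a sketch of the dangerous pattern and a safer alternative:

    import ast

    user_supplied = "__import__('os').getcwd()"   # could be far worse

    # DANGEROUS: attacker-controlled text is handed to the language's
    # evaluator, so what should be data becomes executable code.
    print(eval(user_supplied))

    # Safer: a restricted evaluator accepts only literal values and
    # rejects anything executable.
    try:
        value = ast.literal_eval(user_supplied)
    except (ValueError, SyntaxError):
        value = None   # reject anything that is not plain data
    print(value)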
Although the recent phpBB XML vulnerability has been remedied, a new vulnerability has been announced in the most recent version (2.0.16) which could allow for code of the attacker's choice to be run in the browser when a victim visits a phpBB site.
Of interest is the PHP XML vulnerability, which research has shown was first reported on May 1, 2003. The person who made the report was not aware of what they had uncovered, only that one of their scripts was behaving in a strange manner. The problem they encountered was exactly the one which has now been exploited as the PHP XML vulnerability. This is how a lot of vulnerabilities are found - a strange behaviour occurs, which a security researcher then investigates further, uncovering a security risk.
Sometimes software developers and hardware designers make decisions which do not appear to cause any problems initially, but are later found to be significant design errors. The whole Y2K bug issue was the result of system designs that considered two digits to be sufficient for representing years, which meant that the only century whose dates could be represented was the 20th century. Another time-related design issue looming in the future is the Unix timestamp rollover in 2038. Unix-type systems, including Linux and OS X, can use a timestamp based on the number of seconds since the Unix epoch - midnight GMT, January 1, 1970. The valid date range that such systems can typically recognise runs from Friday, 13 December 1901 20:45:52 GMT through to Tuesday, 19 January 2038 03:14:07 GMT. This date range represents the limits of a signed 32-bit integer with the epoch as origin.
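The boundary is easy to demonstrate. A quick Python sketch of the two limits of a signed 32-bit second counter:

    from datetime import datetime, timedelta, timezone

    EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

    # Largest and smallest values representable by a signed 32-bit
    # counter of seconds since the epoch:
    print(EPOCH + timedelta(seconds=2**31 - 1))  # 2038-01-19 03:14:07+00:00
    print(EPOCH + timedelta(seconds=-2**31))     # 1901-12-13 20:45:52+00:00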
The relevance to current software design is that some systems continue to use certain trigger dates as internal alerts. A recent example arose around the date 11/11/11, which a certain system uses to identify transactions that should not be processed. In the discussion that followed, it was suggested that a lot of Y2K fixes never moved to a four digit representation for years; the actual solution applied was a 10 to 50 year offset to the years being stored. What this means is that the Y2K problem for these systems has only been delayed, not solved, and they remain limited to a single 100 year period for date representation.
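A sketch of that windowing technique - two-digit years are mapped into a fixed 100-year window around a pivot (the pivot value here is invented for illustration):

    PIVOT = 30   # two-digit years below this are treated as 20xx

    def expand_year(two_digit_year):
        # Map a two-digit year into a fixed 100-year window: 00-29
        # become 2000-2029 and 30-99 become 1930-1999. Dates outside
        # that window simply cannot be represented - the Y2K problem
        # deferred, not solved.
        if two_digit_year < PIVOT:
            return 2000 + two_digit_year
        return 1900 + two_digit_year

    print(expand_year(5))    # 2005
    print(expand_year(75))   # 1975
    print(expand_year(31))   # 1931 - wrong if 2031 was intended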
The tragic events in London last week have already drawn the attention of virus writers, although the resulting virus has yet to spread widely. The email borne virus claims to be from CNN, purports to contain unique footage from the blasts, and even goes so far as to claim that it has been cleared by Norton Antivirus. At the time of writing, the virus does not yet have a name; it is designed to turn infected machines into spam zombies. Subscribers to the mailing list received early warning of this, and the advice in the following paragraph should be considered whenever a significant global event takes place.
Readers should be careful to avoid email attachments claiming to have content related to the recent central London attacks. Likewise, websites other than the official ones that claim to be collecting funds, or to provide lists of the dead and injured, and which ask for your name, address, and bank account details, should be avoided. 419-type email fraud spam (Nigerian Spam) is also likely to increase in the near future, playing off the grief and shock surrounding the bombings in London, and people's natural willingness to help. The best advice is that even in this time of grief and shock, you shouldn't lose your ability to think critically about email content or website appearance.
A recent article on eWeek suggests that 9 out of 10 American Internet users have changed their online habits in some way as a direct result of fears surrounding spyware and adware. The report the article was based on details the breakdown in changed behaviour, and the specific changes cited are recommended for all Internet users: changing browsers away from Internet Explorer, limiting use of P2P file sharing applications, applying caution to email attachments, and avoiding untrusted websites.
Canadian newspaper The Globe and Mail is the latest in a string of mainstream media agencies alerting readers to the threats that electronic crime presently poses. Citing examples such as the recent 40 million credit card breach and phishing statistics, the article argues that cybercriminals are operating with impunity, and that the agencies and companies trying to prevent and mitigate online crime are barely keeping up. The concerns raised are valid, and it is vital for all computer users to be aware of their risks when operating online, or storing critical data in computerised systems.
There are still many people who consider identity theft to be a non-issue that only affects people who order things online, or who aren't careful with their credit card details. A recent article in The Globe and Mail provides more evidence of what actually goes on in the trade of stolen personal information. Although the article focussed on the trade in Russia, posters to Internet forums suggest that this trade is happening globally, and this is where the information from the major data breaches ends up - such as the recent 40 million credit card disclosure. A number of subscribers to this newsletter do business with Russians, and we sincerely hope that they audit their personal data to ensure that they haven't been affected, at least not yet. Even if your identity hasn't been stolen yet, the legal trade in your details goes on daily. Major aggregator companies, such as ChoicePoint in the USA, continue to sell highly sensitive personal information, and spyware companies such as Claria present a significant new threat with their terabytes of information sucked from infected computer systems. There is also sufficient anecdotal evidence to suggest that wherever personal information is stored electronically, it isn't safe, be it at a bank, a merchant, or a Government agency - money talks, and money gets.
Infection Rates and Malware Evolution - 04 July 2005
Thank you to the people who are becoming regular readers of this column - it is really making it a worthwhile endeavour to sit down and write them each week. Please don't hesitate to email with your questions or suggestions for future column topics.
To start off the column, some fairly significant vulnerabilities have been uncovered recently in PHP, in the XML_RPC library which is a standard module in a lot of PHP installations. The ubiquitous forum software phpBB has fallen victim to this particular vulnerability, which appears strange, as recent versions were not susceptible, but the most current version (at time of writing) re-introduced the vulnerability. Given the history of phpBB vulnerabilities, and of remotely exploitable PHP vulnerabilities, it is expected that exploits will appear in the wild, with active attacks commencing in the very near future. Workarounds are available for site administrators, and the main version of phpBB is expected to be updated to fix this issue.
Computer security company Sophos announced last week that the average time to compromise for an unprotected Windows XP system plugged into the Internet is only 12 minutes. The ISC team at SANS argue that the infection time is more like 31 minutes, and others claim that it is as low as 6 minutes. Anecdotal evidence suggests that the time to compromise can be almost immediate, given the appropriate environment, such as a hostile University network. As a thought exercise, it is possible to argue that infection rates are dependent upon infection mechanisms:
- Linear infection rates exist when manual human input is required for each infection, such as in some website defacement attacks. For any given period of time, the same number of machines will be attacked and compromised, irrespective of the number of previously compromised machines.
- Exponential infection rates exist when automated attacks are the cause of infection, such as the Blaster and Sasser worms which attacked default services on Windows. If it can be assumed that there is an unlimited supply of new machines vulnerable to attack, an automated attack will produce this sort of spread, because each infected computer contributes to launching attacks against new targets.
- In a real world environment, the infection rate is probably closer to a modified Sigmoid function, a curve which looks like an 'S': near exponential growth initially, which then levels off as the pool of vulnerable machines is exhausted and as network administrators and protection tools respond. Graphed as the number of currently infected machines against time, it might look something like a bell curve, with a drop in total infected machines once administrators find a way to defeat the worm; graphed as the percentage of possible victims infected against time, it will look more like the 'S' curve. A rough sketch of all three models appears below.
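A rough Python sketch of the three models, with the sigmoid implemented as a logistic curve (the population size and rate constants are invented purely for illustration):

    import math

    POPULATION = 100000   # total vulnerable machines (invented figure)

    def linear(t, rate=50):
        # Manual attacks: a fixed number of new compromises per hour.
        return min(rate * t, POPULATION)

    def exponential(t, initial=10, rate=0.5):
        # Automated attacks with unlimited targets: each infected
        # machine recruits more, so growth compounds.
        return min(initial * math.exp(rate * t), POPULATION)

    def sigmoid(t, initial=10, rate=0.5):
        # Logistic curve: exponential at first, levelling off as the
        # pool of vulnerable machines is exhausted or defended.
        return POPULATION / (1 + (POPULATION / initial - 1) * math.exp(-rate * t))

    for hour in (0, 6, 12, 24, 48):
        print(hour, int(linear(hour)), int(exponential(hour)), int(sigmoid(hour)))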
Some news organisations were reporting early last week that the US Supreme Court had ruled that companies creating peer to peer applications were to be held responsible for the use of those applications to download material that infringes copyright. That interpretation of the ruling is incorrect. What the US Supreme Court established was that it is illegal to promote an application's use for copyright infringement / illegal activity, and that in that instance the developers can be held liable for infringement by third parties using the application.
A case developing along similar lines in Australia has seen two ISP administrators being sued for apparently ignoring peer to peer activity on their networks, and ignoring the infringement notices that were sent to them. The administrators are being sued on the basis that they allegedly maintained a BitTorrent hub that was used to exchange music files. An initial court ruling found that their actions were within the limitations of their job specifications, but that has been overturned, allowing the current action to take place. The legal team for the ISP claims that the customers were responsible for maintaining the hub, not the ISP. Reporting on the Supreme Court decision was fairly widespread, with many commentators twisting the ruling to fit their own viewpoints. Representatives of online music downloading services went on record claiming that the ruling meant the use of P2P software was illegal, but those reports did not appear to refer to the actual ruling.
In fairly big news from mid week, Pakistan's undersea communications cable suffered a failure that severed all Internet connectivity to the nation (internal networks continued to function). According to news reports, the fault was due to a power failure in the cable system. Connectivity was maintained through backup satellite links, which could only provide 10% of the original bandwidth, and secondary connections are being planned through India. Quite a number of countries make use of the cable (SEA-ME-WE 3), including India, Dubai and Oman, so with repairs expected to take up to two weeks, it is possible that the UAE and India will encounter connection disruptions while it is fixed. Pakistan is not the only country to encounter difficulties with undersea connections recently: New Zealand suffered problems when rats destroyed a cable at the same time as a second cable was offline for maintenance, and last November the undersea cable supplying Colombia and Ecuador with their access was severed.
Microsoft is apparently in talks to buy Claria, a well known spyware company initially called Gator. Rumours abound as to what the likely outcome from this potential acquisition will be. One of the more sane suggestions is that Microsoft will be using the terabytes of personalised computer use data that Claria has acquired, along with the Google Adwords equivalent that Claria has hinted at developing, to aid MSN in 'catching up to Google'. Other rumours suggested that some Claria / Gator technology would be making its way into the next Windows release as a means to ensure only legal copies of the software get installed.
Staying with Microsoft news, reports are rising again of trojans that claim to be legitimate Microsoft patches. This has been reported on and off for quite a while now, and should be considered an ongoing issue which will continue to plague whichever Operating System is most prevalent. Other recent reports, again largely affecting only Microsoft systems, indicate that various malware are evolving in their ability to cause mayhem. One of the more recent evolutions in the dirty tricks war has been the ability of malware applications to identify and disable antivirus applications, firewalls, and other protective software. This is now apparently progressing to malware engaging in packet filtering. A packet is a small parcel of information that crosses a network, and many packets combine to carry the information that has been requested. In packet filtering, an application watches the network packets that go back and forth, and identifies certain behaviour / content of interest for subsequent processing. What this malware tends to do is drop (refuse to pass on) packets travelling to and from antivirus autoupdate sites, or Operating System update sites. This behaviour doesn't require modification of system files (such as the modification of the hosts file, which is a common alternative technique), and is more difficult to identify and troubleshoot.
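By contrast, the hosts-file technique mentioned above leaves an easily spotted fingerprint. Typical malicious additions look like the following (the domain names are invented stand-ins):

    # Lookups for the vendors' update sites are short-circuited to the
    # local machine, so update checks silently fail.
    127.0.0.1   update.example-antivirus.com
    127.0.0.1   downloads.example-osvendor.com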