
Terms of Use

Sûnnet Beskerming Pty. Ltd. occasionally produces small reports for free (gratis) distribution. The free content may cover any area in which Sûnnet Beskerming operates; examples include generic security advice, specific security warnings, development practices, and application tuning. The only caveat on reuse of information from this site is set out in the following paragraph.

Use and reuse of information from this site requires written acknowledgement of the source for printed materials, and a hyperlink to the parent Sûnnet Beskerming page for online reproduction. Content from this page can not be reused in a commercial context without negotiating an appropriate licence with the site owner. Personal and educational use is granted without additional restriction, up to an amount consistent with the principle of "fair use"; site users are encouraged to exercise fair judgement as to what that amounts to. Please contact us if you reuse our content, so that we can provide more specific advice where necessary to improve your reproduction.

If you are interested in any of our other services, information about them is available from the parent site - Sûnnet Beskerming - Information Security Specialists.

'Tis a Fragile Thing - 31 October 2005

The recent severing of the peering point between two US Tier 1 ISPs, Cogent and Level 3 (see the columns from two and three weeks ago), started a lot of people thinking about the fragility of the Internet, and about what could happen in the future if the gentlemen's agreements that keep the backbone operating were to cease. A little over a week ago, an unintentional outage affected a large number of American Internet users. This time the Tier 1 ISPs Verio and Level 3 were affected, with routing services going awry and effectively making their complete networks vanish from the Internet.

The actual equipment that makes up the core of the Internet, the network routers, relies upon routing tables to direct the traffic that flows through it. These tables are like miniature network maps, allowing a router to pass traffic to the appropriate neighbouring system so that it reaches its destination as efficiently as possible. In the recent outage, Level 3 claimed that a software upgrade had damaged the routing tables, which effectively blocked the information that was passing through their network.
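As a rough illustration of the role these tables play, the following sketch implements a longest-prefix-match lookup over a hypothetical routing table (the prefixes and next hops are invented for the example); a real router makes the same kind of decision for every packet, and a corrupted table leaves traffic with no valid next hop.

    import ipaddress

    # Hypothetical routing table: prefix -> next hop (all values invented for illustration).
    ROUTES = {
        "0.0.0.0/0": "upstream-a",        # default route
        "192.0.2.0/24": "peer-b",
        "192.0.2.128/25": "customer-c",
    }

    def next_hop(destination):
        """Return the next hop for the most specific matching prefix, or None."""
        dest = ipaddress.ip_address(destination)
        best = None
        for prefix, hop in ROUTES.items():
            network = ipaddress.ip_network(prefix)
            if dest in network and (best is None or network.prefixlen > best[0].prefixlen):
                best = (network, hop)
        return best[1] if best else None

    print(next_hop("192.0.2.200"))   # customer-c (the most specific match wins)
    print(next_hop("198.51.100.7"))  # upstream-a (falls back to the default route)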

Technically, the routers started to 'flap' as they cycled between two invalid routing tables in a vain attempt to get the traffic they were holding to a valid destination. As routers went offline, a cascading failure began: more routers became unable to pass traffic to a valid location, forcing them to flap between their alternate routing tables, which also became invalid and took them offline as well. The total outage lasted around 2-3 hours, as by that time other routers had updated their tables to route around the Level 3 and Verio networks, and the Level 3 systems were being brought back online. The remaining network segments would continue to carry the increased load until the full systems could be re-established.

While this is the second time within a month that Level 3 have been involved in a blackout of a portion of the Internet, they are not struggling to keep up just yet, although they are reported to be suffering financially even with a stable revenue and customer base. The bigger problem is for the lower-tier ISPs, who had to explain to irate customers that the Internet was down (as far as they could reach) and that there was nothing that could be done about it. Unfortunately, their reputation for reliability will have taken a hit, and it is possible that these ISPs have lost customers as a result.

Some observers claim that these recent events make a strong case for ensuring that network access for large companies is 'multi-homed', meaning that their network traffic can pass through a number of Tier 1 providers when it arrives on the Internet backbone. This allows for continued network access should one of the Tier 1 providers suffer an outage. Selecting a multi-homed ISP as the network connection for end users will also help maintain connectivity in the case of an outage.

When the Internet is up and running, one of its increasing uses is online banking and financial transactions. The US Federal Financial Institutions Examination Council (FFIEC) has recently called for US banks to have implemented two-factor authentication for online banking by the end of 2006. It is expected that the most common implementations will be a variant of the key fob model, as applied by RSA Security's SecurID, or a one-time password model, as applied by a number of European banks.
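As an illustration of the one-time password idea, the sketch below generates HMAC-based one-time codes of the kind such tokens and code lists provide; the shared secret and counter values are invented for the example, and this is not a description of any particular bank's or vendor's scheme.

    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """Derive a short one-time code from a shared secret and a moving counter."""
        # The counter is packed as an 8-byte big-endian value and signed with the secret.
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        # Dynamic truncation: take 4 bytes at the offset given by the low nibble of the MAC.
        offset = mac[-1] & 0x0F
        code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    # Hypothetical shared secret; the bank and the token would each hold a copy.
    secret = b"example-shared-secret"
    for counter in range(3):
        print(counter, hotp(secret, counter))

Note that the resulting code still travels over the same channel as the rest of the banking session, which is exactly the single-channel weakness discussed below.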

Concerns have been expressed about the cost and reliability of these added features, and whether the costs will be passed through to consumers in the long run. The security of most methods has also been called into question, as both factors of authentication travel over the same channel (i.e. the Internet). This single channel still allows for spoofing attacks (phishing), where the attacker positions themselves between the bank and the user, pretending to be the user to the bank, and the bank to the user. Until recently it was considered impractical to perform this sort of attack against the newer two-factor authentication models; however, a successful phishing attack in Europe has shown this to be false.

Proper two-channel methods, such as using SMS or voice calls, introduce significant overhead to the transaction process, which is ultimately likely to be passed on to the client, and can still be bypassed through social engineering or the theft of the mobile phone (one of the most commonly lost or stolen personal items).

By making random capture of this information more difficult for attackers, these measures make it more likely that, whenever the security model is breached, the banks will blame the victims for failing to apply it correctly. While an increased level of personal responsibility will go a long way, it is still important to remember that attackers can still operate happily. Human nature being what it is, any time access controls are tightened, more people will go out of their way to make their own access a little easier. That might mean taping a password to a key fob, or carrying the one-time password sheet everywhere; all they then need to do is lose it or get mugged.

Cynical observers have pointed out that the change may not be in the best interest of consumers (any benefit to them being only a happy coincidence), and that it is the insurance behind the banks (the FDIC) that is pushing for the improved security models, ostensibly to protect the financial institutions against the losses. Unfortunately, the methods only increase the complexity of the required attacks by a minimal amount.

It is also highly likely that this will see a new form of phishing attack introduced: the real-time attack, in which the attacker actively manipulates the account while the user is trying to log in (a derived man-in-the-middle attack).

Armchair Spies - 24 October 2005

Originally starting life as the ARPANET (the DARPA-funded research network) and a collection of university networks, the Internet has grown over the last few decades to become the massive global network of information that it currently is. The content of this network, freely available to many, contains a lot of information that allows for espionage in relative anonymity. All it requires is a network connection, and a willingness to look. Information that was previously only open to state-sponsored intelligence agencies, particularly those with space-based imaging platforms, is now freely available to all Internet users.

The presence of search engines such as Google, Yahoo, InfoSeek, and a number of others that claim to index the Internet makes accessing this information a much simpler task. The use of software programs variously known as spiders, crawlers or robots allows these companies to build searchable, indexed databases of the content that appears on the Internet. Although technologically unimpressive, these automated tools unearth incredible amounts of information that would otherwise be better left hidden.
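At heart a crawler is little more than a loop that fetches pages, records what it finds, and follows the links it encounters. The minimal sketch below shows the idea; the starting URL is a placeholder, and a production crawler would also honour robots.txt, rate limits and deduplication on an enormously larger scale.

    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collect the href values of anchor tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_pages=10):
        """Breadth-first crawl that stores page content and follows links."""
        queue, seen, index = [start_url], set(), {}
        while queue and len(index) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except OSError:
                continue
            index[url] = html  # a real engine would tokenise and index this text
            parser = LinkExtractor()
            parser.feed(html)
            queue.extend(urljoin(url, link) for link in parser.links)
        return index

    # Placeholder starting point for illustration only.
    pages = crawl("http://www.example.com/")
    print(len(pages), "pages fetched")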

Unfortunately for those who are trying to prevent access to their sensitive information, it is extremely easy to misconfigure the interfaces that systems have with the Internet, unknowingly exposing data that should really be hidden. For hackers who seek to exploit this information, the various Internet search engines form an essential component of their toolkits. The caching of web content by the search engines can allow a remote attacker to fully research a target without actually contacting the victim's systems or sites.

Governments will always be engaged in activities which they do not want disclosed for fears that disclosure will harm national security. These protected activities will always attract attention from a section of the general population who have dedicated themselves to discovering the specifics. The Internet allows these people to coordinate their efforts more easily, provides a means for more rigorous fact checking, and allows a far wider distribution of the information than would otherwise be possible.

Such information access means that industrial, national and military espionage activities are almost trivial to undertake; a target may never know that they have been selected for espionage, and may never know what information has been discovered about them. Historically, only State agencies have had the resources to conduct this level of espionage, but now it is available to anybody with a network connection, access to a public library, and a critical mind.

For example, space-based imagery had previously only been available to countries with a space launch capacity, and their trusted allies. The late nineties saw commercial providers entering the space imagery industry, and the major powers sought to restrict the level of detail and the distribution of the content that these providers made available. Imagery of sites such as the White House has been modified to reduce the detail freely available from a number of online sources. This is only a stopgap measure, as the section of the population that seeks out protected information will devote themselves to obtaining unmodified imagery, or even directly observing the locations and reporting on the details.

Now, high resolution space-based imagery for significant areas of the globe has been made available through online services such as maps.google.com. This has caused concern in a number of nations, including the Netherlands, Australia, North Korea and South Korea, with reports that Russia is concerned as well. Initially the images were 2-3 years old, but important buildings don't tend to move great distances in such a short timeframe. The coverage and resolution offered by this service, and others, means that military installations, critical infrastructure, communications arrays, and other sensitive locations are now freely viewable, sometimes at resolutions approaching 1m per pixel. It is possible to determine working flight line layouts for military installations (practical examples being F-117 and SR-71 flight lines in the USA), military dockyard layout and facility usage, and ground based unit distribution. With rudimentary knowledge of antenna theory, it is possible to determine operational frequency bands, ranges and operational limitations for RADAR and communication arrays, including system weak points for disruption of services.
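As a simple illustration of the antenna theory point, the resonant frequency of a basic half-wave dipole can be estimated from nothing more than its element length; the sketch below applies the standard half-wavelength approximation (the element length is an invented figure, not a measurement of any real installation, and real antennas are typically a few per cent shorter due to end effects).

    # Speed of light in metres per second.
    C = 299_792_458

    def dipole_frequency_mhz(element_length_m: float) -> float:
        """Approximate resonant frequency of a half-wave dipole of the given length."""
        # A half-wave dipole resonates where its length is about half a wavelength,
        # so wavelength ~ 2 * length and f = c / wavelength.
        return C / (2 * element_length_m) / 1e6

    # Hypothetical element length scaled off overhead imagery.
    print(round(dipole_frequency_mhz(1.5)), "MHz")  # roughly 100 MHz, i.e. the VHF band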

As some observers have noted, you can't blame technology for terrorism (being the current enemy of the month). Historically, militant organisations have shown a willingness to adopt new technologies in an effort to gain a technological advantage over their enemies, who might be held to a rigid bureaucracy which slows their technological uptake. Technology which offers anonymous delivery and distribution of their messages, especially to a global audience, is particularly valuable, and it forces existing intelligence bodies to re-evaluate their threat matrices to account for these new vectors.

Following on from last month's column on Techno-Arrogance, this is an opportunity to put forward some reasons why there is such a discrepancy between the technology aware and the technology naive.

Arguments that the technology naive are 'stupid people', or 'ignorant by choice', are very demeaning and reflect an inability to communicate on the part of the technology aware (although there are some who will never listen or learn). One thing which tends to be forgotten by the technology aware is that their technical expertise is far above average, and what may be obvious to them is not necessarily obvious or straightforward to everyone else.

When a well designed phishing email arrives in the inbox of the technology naive, they can be easily fooled, and even the technology aware are not immune to a well crafted attack. Analysing the source of an email, and interpreting the results, should not be a necessary part of receiving email, but it is rapidly becoming an essential skill when sorting through a mailbox. Unlike a telephone call or a letter in the mail, a falsified email is more difficult to detect once trust has been placed in the medium itself.
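A very rough version of that source analysis can even be automated. The sketch below parses a raw message with Python's standard email library and flags a simple mismatch between the claimed From domain and the Return-Path domain; the sample message is fabricated, and real phish detection needs far more than this single heuristic.

    from email import message_from_string
    from email.utils import parseaddr

    # Fabricated message headers for illustration only.
    RAW_MESSAGE = "\n".join([
        "Return-Path: <billing@bulk-mailer.example.net>",
        'From: "Your Bank" <support@bank.example.com>',
        "Subject: Please confirm your account details",
        "",
        "Dear customer, please follow the link below...",
    ])

    def sender_domains(raw: str):
        """Return the domain claimed in the From header and the one in the Return-Path."""
        msg = message_from_string(raw)
        from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
        path_domain = parseaddr(msg.get("Return-Path", ""))[1].rpartition("@")[2]
        return from_domain, path_domain

    claimed, actual = sender_domains(RAW_MESSAGE)
    if claimed and actual and claimed != actual:
        print(f"Suspicious: From claims {claimed} but the mail path says {actual}")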

A random person on the street claiming to be from your bank is no different from the phishing emails that many people receive, but it is much easier to determine that the person on the street is not legitimate (at least for the technology naive). Perhaps that is the key to user education: associating unsolicited emails with random people walking up on the street, and learning to delete and ignore them.

Even the hiring of 'technical' staff by the technology naive can be an eye-opening experience. It has been said that when an employer requires certain technical certifications (e.g. CCNA, MCSE, CISSP) to be held by employees, then it is possible to apply a generic technical model to that company:

This generic model will not always apply, and will not always apply completely, but it does form a minimum standard of what to expect from people seeking employment in technical areas, and it can point out the difficulties of relating non-technical business operators to their highly technical employees.

This argument has also seen significant discussion recently, as the CISSP (Certified Information Systems Security Professional) has come under fire for no longer being a relevant certification, with 'certificate mills' pumping out CISSP holders who are diluting the perceived image of reliability that the certificate once attracted. Microsoft's MCSE and Cisco's CCNA have also been through this argument in the past, although the higher ranked certifications, such as Cisco's CCIE, appear safe, at least for a while longer. The SANS GIAC certification, which is still held in high regard amongst many technical people, is also heading towards a similar fate, with recent changes in the requirements that must be met to proceed along the path of obtaining certification.

Liability? What Liability? - 17 October 2005

Many industries and professions have strict liability and accountability regulations, which can significantly increase the costs of doing business, and which provide a place to assign responsibility should something go significantly wrong. Strangely, the software development industry tends to avoid liability for errors in the software it produces. The Therac-25 is one known example where software errors can be directly linked to human deaths. Likewise, the flight control software in early Airbus fly-by-wire models could be argued to be responsible for human deaths (although it can be counter-argued that it was the human error of not knowing the software operating boundaries which caused the fatal crashes). In the End User Licence Agreements (EULAs) that are presented to the user before initial installation or use of software, software vendors will tend to claim that they have no liability or responsibility for the effects of their software on the end user's system.

Mentioned in columns in May and August, EULAs can introduce problems for end users and clients of the vendor. While end users can shoulder some personal responsibility for acquiring and installing the software, who is to be held responsible for vulnerabilities and exposure to risks from flaws in the software? As mentioned above, the software vendors sidestep this liability through the use of the EULA, at least for consumer level software. For the right amount of money, vendors will develop software to appropriate standards, and provide a level of accountability for flaws within it; this essentially amounts to the vendor offering insurance for problems caused by flaws in its own software. Going further, and for even more money, it is possible to develop software which is near-perfect, and which the vendor supports completely.

The problem with the above argument is that the market has set its own baseline for what is an acceptable level of software performance and stability, and that baseline is unfortunately quite low. Better software takes time, and by the time a polished application is ready, it is highly possible that the market has already been saturated by sub-standard offerings from other companies. While it is fashionable to complain and whine about the vulnerabilities in commercial software, the market has only itself to blame, having forced the current situation when it chose the lower priced vendors.

Discussion about where liability rests for software vendors has come at an opportune time, with Microsoft releasing their October Security patch updates last week. With nine patches released, it is strongly recommended that all people who use Microsoft Windows update their systems as a matter of extreme urgency. A number of the vulnerabilities can be remotely activated to completely compromise a vulnerable system.

Exploit code is already circulating for some of these recently patched vulnerabilities. Initially it had been reported that exploit code was developed at the time of patch release, but was confined to one or two online locations. The exploit code has since escaped this boundary, and is starting to appear in numerous locations. It is not known at this stage whether the code has been turned into an active exploit for delivery, but it is feared that an automated attack based on MS05-051 will only be a matter of days away (at most).

The most critical security patch, MS05-051, has also caused issues for some users, with their systems effectively becoming unusable following application of the patch. Although Microsoft have now provided information on their site for correct application of the patch, and resolution of the issues which could arise, it is possible that many people will uninstall the patch, or delay application unnecessarily, due to the added administrative burden required to properly configure the systems.

A major spike in traffic on the affected network port appears to have been arrested over the last couple of days, but it is possible that the activity represented probing actions, and that this is the proverbial calm before the storm as the US and Europe pass through the weekend, with a mass exploit due in the very near future.

It is considered probable that an automated worm will appear soon, and will begin mass infection of systems, with the peak of the attacks hitting at the start of the US working week. It has also been disclosed that there are some low threat malware applications which remain functional even in Windows Safe Mode, which is normally used to effectively remove embedded malware. It is predicted that this sort of capability will only become more common across the various malware authors and applications.

The vulnerabilities currently being targeted as a result of Microsoft's patch releases are not the only major threat that is facing the Internet over the next couple of months.

The ongoing argument between the Tier 1 ISPs, as covered in last week's column, continues to simmer; public exploit code has been released for a month-old Cisco router vulnerability; and a brewing argument between the European Union and ICANN (the US-based organisation which manages the relationship between domain names and IP addresses) over control of DNS could threaten the future operational structure of the Internet.

Essentially, the EU is arguing that the US should not have sole control of the top level of the Internet, and that it should be passed to a more global organisation (such as itself). The risk is that, if a consensus is not reached at talks in Tunisia, competing top-level records could be established by ICANN and the EU. If EU companies were mandated to register with the EU servers, it could lead to situations where a site such as telekom.de points to two completely different addresses, depending on which set of address servers you use. While it is more likely that most companies would simply register the same data with both authorities, problems surface when there is a dispute, or when companies are too slow to establish the duplicate records and a competitor or squatter gets the records first. Given that there is no compulsion to use any of the existing services, it will be interesting to observe how this change is mandated, and enforced.
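The split-root scenario is easy to picture in code. The sketch below queries the same name against two different recursive resolvers and compares the answers; it assumes a recent version of the third-party dnspython library, and the resolver addresses are public resolvers chosen purely for illustration. Today the two answers should match, but with competing top-level records they could legitimately differ.

    import dns.resolver  # third-party: pip install dnspython

    def answers(name: str, nameserver: str) -> set:
        """Resolve an A record for the name using only the given nameserver."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        return {record.address for record in resolver.resolve(name, "A")}

    name = "telekom.de"
    first = answers(name, "8.8.8.8")   # one resolver hierarchy
    second = answers(name, "1.1.1.1")  # another resolver hierarchy
    if first != second:
        print(f"{name} resolves differently: {first} vs {second}")
    else:
        print(f"{name} resolves consistently: {first}")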

The news from last week's column about the English security contractor who was convicted of 'hacking' a donations website has continued to inflame emotions within the Information Security industry. Daniel Cuthbert was charged under the UK Computer Misuse Act for 'hacking' a website which was accepting donations following the 2004 Tsunami. A number of people have stepped forward to provide character references in an effort to bring at least some balance to the situation and the reporting. It has also come out that Daniel is involved with OWASP, an industry group concerned with improving web application security knowledge and implementations; he leads the London chapter of the organisation and will be presenting at the upcoming OWASP conference in the US.

Unfortunately, it appears that Daniel's Information Security career has come to a sudden end as a result of the court case. It also appears that the case has caused a schism between Information Security researchers and law enforcement agencies, quite the opposite result from what was probably desired from the case. It has also caused deep division within the industry as various groups align on the different sides of the argument (that he did, or did not, do anything wrong), and it is likely that the current arguments will leave at least some lingering effects.

Losing a skilled Information Security worker could be seen as a strange move, given a recent survey conducted by IDC, on behalf of Cisco, of various CIOs across the UK and Europe. The survey found that respondents are expecting a shortfall of skilled workers as soon as 2008. In the UK it is estimated that as many as 40,000 positions may be unfillable, and across Europe the figure may be as high as half a million. A Cisco spokesperson indicated that these positions will not be general IT positions, but rather highly specialised ones, such as wireless technology (802.11, Bluetooth) and security. It has been said for a while now that Australia faces a critical shortage of skilled Information Technology personnel, amongst a greater shortage of skilled workers in the workforce overall. This latest survey indicates that the problem appears to be facing other countries as well.

Perhaps some of these skilled positions will be in protecting mobile gaming devices. Following on from news that the Sony PSP has been affected by malware which makes infected units unbootable, it has been disclosed that the Nintendo DS (a competing hand held gaming system) now has its own malware which also makes the unit unbootable. Known as DSBrick, the malicious software poses as user developed code, and overwrites various sections of memory in order to make the DS unbootable.

Sometimes, the extra staff will not necessarily prevent bad things from happening. The most advanced online banking security model currently in use (the equivalent of a one time pad) has recently attracted extra attention from phishers. A Swedish bank was recently targeted with an attack which requested that users enter the next several codes from their pad, providing the phishers with a limited number of attack opportunities. While increased user knowledge would avoid such a blatant attempt, a well designed phish which claims that the first code was improperly entered will be more likely to successfully obtain details that can be used to access the accounts of its victims. This is the first attack of this type to be reported on, although it was predicted quite a while ago (by us). The Sûnnet Beskerming online banking security products, and model, provide security that can not be defeated through tricks such as this, and which is immune to all known phishing attacks and predicted attack models.

Finally, a threat which is not an Information Security threat at all, but which demonstrates well, on a slower timescale, how Information Technology systems tend to get infected. The current strain of bird influenza, H5N1, which is causing concern globally for its potential to be a repeat of the 1918 pandemic (actually, much, much worse than 1918 - which was also a bird flu variant), appears to have mutated to be resistant to the most widespread pharmaceutical defence against it, Tamiflu (oseltamivir). It has been reported that a Vietnamese girl who was infected with the virus showed no response to the drug when it was administered to her; instead, her body developed a much stronger, drug-resistant strain.

The current spread of the disease through wild and domesticated bird flocks, and the last-ditch attempts to control that spread, have a parallel in rapidly spreading computer-based worm and malware infestations. An initial host is infected intentionally by the malware author (natural virus mutation in the birds), and the infection then seeks out other vulnerable hosts in the nearby environment (the same for the birds). Once a certain critical point has been reached, the infection escapes the initial contained environment and starts attacking more remote hosts (the same for the birds).

From here, the infection saturates local vulnerable hosts and continues to spread along network connections (migratory paths for the birds). As defences are activated, such as network disconnection, firewalls, IPSs, active sysadmin actions, and non-vulnerable hosts, the infection rate slows and eventually subsides to background noise as all vulnerable hosts are exhausted. The same thing is being observed with the bird flu, with continued global spread into Europe despite slash-and-burn tactics against bird flocks. Countries not yet infected are starting to take action, such as the Netherlands dictating that all domesticated birds must be kept indoors, while others continue to be protected by geography (such as Australia and New Zealand), despite the geographical proximity of confirmed human cases (Indonesia).
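The epidemic parallel can be captured with a toy susceptible-infected-recovered style simulation. The sketch below is deliberately simplified, with invented parameters, and is intended only to show the characteristic pattern of rapid spread followed by decay as the vulnerable hosts (or birds) are exhausted and defences take hold.

    import random

    random.seed(1)

    def simulate(hosts=1000, contacts=4, p_infect=0.1, p_clean=0.05, steps=60):
        """Toy worm/flu spread: each infected host probes a few random hosts per step."""
        vulnerable = set(range(1, hosts))
        infected = {0}  # a single intentionally infected initial host
        history = []
        for _ in range(steps):
            newly_infected = set()
            for _ in infected:
                for _ in range(contacts):
                    target = random.randrange(hosts)
                    if target in vulnerable and random.random() < p_infect:
                        newly_infected.add(target)
            vulnerable -= newly_infected
            # Defences (patching, culling) remove a fraction of infected hosts each step.
            cleaned = {host for host in infected if random.random() < p_clean}
            infected = (infected | newly_infected) - cleaned
            history.append(len(infected))
        return history

    counts = simulate()
    print("peak infections:", max(counts), "infections at the end:", counts[-1])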

If and when the virus mutates and blends with the existing human influenza, it is probable that an extremely virulent influenza that is transmissible from human to human will be formed, and the pandemic will have started.

Much to Think About - 10 October 2005

Microsoft has released advance notice that there will be nine patches in the October Security Patch release (Tuesday). Eight of the patches are for the core Operating System, while the ninth is for Microsoft Exchange as well as the core Operating System. Given the short timeframe between patch release and exploitation for the August patches, it is very strongly recommended that all Microsoft Windows users apply the patches as a matter of some urgency following their release. Most of the patches will be rated critical, which is Microsoft's highest vulnerability rating.

Following on from their recent attacks against OS X and Open Source Software users, Symantec has brought a complaint against Microsoft to the European Commission anti-trust regulators. The cause for this latest manoeuvre is the proposed Anti-virus and Anti-spyware offering from Microsoft, which is expected to reach specified clients by the end of 2005, and is expected to be bundled with the next version of Windows, Vista.

Symantec's complaint is that Microsoft is about to abuse their monopoly position, to the detriment of the other companies (in particular Symantec) in the Windows-based Anti-Virus market. Observers suggest that it is a sign of Symantec losing their cool with the situation, given their historical revenue base in the Anti-Virus sector.

While it is not likely that Symantec is about to be shut down (they have significant income from other divisions), they definitely appear to have been rattled. Other observers are not surprised, given Microsoft's historical trend of unethical business practices when it comes to business partners.

Unfortunately, Microsoft has always been in a tight situation with their monopoly position. People are ready to complain about the poor security record of Microsoft's products, but when the company makes an effort to release a product which addresses the issues, more people complain that they are shutting out the other companies that have established themselves in a niche position to address the deficiencies of the Microsoft products. The problem is that Microsoft tends to bundle such software in with the Operating System, which effectively shuts out the commercial competitors. If the products were instead released as separate commercial offerings, it is likely that most of the complaints would be resolved.

The upcoming Microsoft Anti-Virus solution is too late for the current exploits doing the rounds. Exploit code for a Windows XP Wireless Self-Configuration Service vulnerability has surfaced and it is considered only a matter of time before an automated tool appears to allow attackers to easily attack Windows XP systems on wireless connections. It is very strongly recommended that all Windows XP system users who connect to the Internet through a wireless connection consider the use of a wired connection until Microsoft can release a patch to resolve the issue.

What appears to be a well developed version of the 'Sober' email worm appeared towards the end of last week. Under the new CME numbering system, this version has been allocated the identifier CME-151. Initial research suggests that the worm selectively chooses email addresses from vulnerable systems, rather than blasting all addresses found. Appearing in both German and English variants, the software attempts to turn vulnerable systems into spam-spewing zombies as part of a hacker's bot network. The email it arrives in purports to relate either to a class reunion or to password changes. Either way, it is a constant reminder for users to be extremely cautious about opening attachments received in email.

A potential outcome of worms like Sober is a so-called zombie network over which the hacker has complete remote control. The Dutch have recently closed down a zombie network of more than 100,000 systems, arresting three men over the issue. According to the news reports, the particular malicious software being used in this case was Toxbot, a piece of malware which affects Windows-based computers. In addition to providing control of the system to the remote hackers, the Toxbot malware also included keylogging capabilities, and the accused are suspected of gaining access to several eBay and PayPal accounts to perpetrate financial fraud.

The zombie network was used in at least one Distributed Denial of Service (DDoS) attack against an unnamed US company, with the possibility that many more attacks were carried out using the network. Dutch law enforcement, in conjunction with a number of ISPs, have dismantled the network, preventing further abuses from that particular zombie network. While it may only be a drop in the ocean, it's a good start.

Windows based systems have not been the only devices which have faced threats from worms over the last week. Following disclosure that an exploitable buffer overflow existed in the firmware for the PlayStation Portable gaming system, a small community of people excitedly began developing tools and software to allow users to create their own content to run on the PSP. In recent days a new software tool, claiming to be from the 'PSP Team', has surfaced which deletes essential Operating System files once it is executed. This normally turns the PSP into an expensive paperweight, as the device is rendered unbootable. It is not known whether there will be any official support for users who have encountered this issue, as it requires users to actively seek out the software, then download and execute it, before it causes any damage.

Even with the new vulnerabilities being exploited, new email worms surfacing, and other threats facing online users, sometimes the greatest threat comes from those who control the actual hardware which comprises the Internet. Although the Internet is ideally a 'mesh' network for maximum flexibility, in reality it is more of a 'distributed' network, with a number of key points through which the majority of the Internet's traffic flows.

The owners of these key points are generally known as 'Tier 1' ISPs, major providers to smaller ISPs and other customers - essentially they own the key infrastructure that the Internet relies upon. Starting early last week, a disagreement between two of these Tier 1 ISPs turned nasty, when one shut off a 'peering point' - the place where traffic flows from one ISP's network into another's - that it shared with another Tier 1 ISP. The risk is that end users of the two ISPs (or customers of the ISPs' clients) will find a whole swathe of Internet address space no longer available for them to view (even with the proper IP addresses). It is not known whether the two ISPs (Level 3 and Cogent) have resolved their differences, or what the long term effects will be. Calls have been made for the management of the higher level services to be government controlled / regulated.
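A toy illustration of why a single peering point matters: the sketch below models a handful of networks as a graph (the node names and links are invented), removes the one link joining the two providers, and checks which networks remain reachable from a customer on one side.

    from collections import deque

    # Hypothetical topology: two provider networks joined by a single peering link.
    LINKS = {
        ("cogent", "level3"),  # the peering point
        ("cogent", "isp-a"), ("isp-a", "customer-1"),
        ("level3", "isp-b"), ("isp-b", "customer-2"),
    }

    def reachable(start, links):
        """Breadth-first search over an undirected set of links."""
        neighbours = {}
        for a, b in links:
            neighbours.setdefault(a, set()).add(b)
            neighbours.setdefault(b, set()).add(a)
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nxt in neighbours.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    print(sorted(reachable("customer-1", LINKS)))      # every network is visible
    depeered = LINKS - {("cogent", "level3")}
    print(sorted(reachable("customer-1", depeered)))   # only the Cogent side remains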

In the short term, the connection has now been re-established, but the warring parties have only been given a month before the larger ISP closes the connection for good. At the heart of the issue are claims over payment for the bandwidth being used, or rather the lack of it. The practical effects of the peering point closure have been felt by many Internet users, with site load times extended significantly as a large chunk of the Internet's network space vanished, a number of sites completely unavailable, and a large number of the warring ISPs' customers effectively finding their Internet access shut off.

The Chief Information Officer of the US Army has recently released a plan which allows 500 days to upgrade the US Army's Internet-based Information Technology infrastructure to provide combat troops with better combat communications. The LandWarNet system, a part of the US Department of Defense's global information network, is set to provide upgraded voice, video and data capabilities to local commanders and individual troops.

The 500 day schedule is extremely optimistic, and there was little coverage of the issues that are plaguing the existing information infrastructure that the US military uses.

While the concept of network-centric warfare is one of the goals that most Western military forces are heading towards, the practical implementations to date have been less than impressive. With everything on the battlefield networked, available bandwidth and information overload become significant factors.

During the initial phase of operations in Afghanistan, the US discovered that their bandwidth capability was rapidly saturated, even without all of the fancy network-centric warfare devices online. The other concern is what happens to all the content that is pushed through the bandwidth - the human processors of the information can not be upgraded to process the information faster, and electronic devices have their drawbacks in terms of raw processing power, so useful information is lost, and the risk of friendly fire incidents increases.

Unless the US military is maintaining its own separate truly mesh network, this new plan runs the same risk of disruption that was recently experienced by the civil networks with the Tier 1 ISP arguments.

A fairly controversial argument that originally started in August has resurfaced, and has attracted some fairly interesting discussion. What began as a debate about the validity of the Information Technology forensics procedures currently in use by law enforcement agencies has re-emerged with suggestions that a number of suicides have been linked to improper forensics work surrounding the cases identified.

It was suggested that the suicide of a senior British Naval figure in Gibraltar can be directly linked to the poor forensic data, and the leak of unverified information (which was later quashed). Other contributors to the discussion suggest that suicides in Italy have been linked to similar incidents. The claims have been dismissed as fanciful by other contributors, but it does at least appear feasible that the claims are true.

A current case which appears to fit this pattern of poor forensics follows on from news earlier this year, when an English Information Security contractor was arrested for using a command line browser to access a website that was established to accept donations following the 2004 Tsunami. A guilty verdict has now been handed down to the accused. Strangely, the conviction is under the Computer Misuse Act, even though the actual justification returned for the guilty verdict was 'lying' to police following the initial arrest.

The verdict has worried a lot of observers, who believe that the charged individual only took reasonable steps to verify the authenticity of the website when he became concerned that his donation was being sent to a phishing site. It has not been determined whether there was any material change to the statements that were initially recorded, and it is worrying many more observers that the accused's various statements were similar, differing only in the usage of technical terms and minor details. The concern is that the various law enforcement agencies are not adequately trained to differentiate between multiple ways of describing the same technical procedure.

For readers who wish to discover some of the details behind the case, the website in question was http://www.dec.org.uk, which is managed by British Telecom for the Disasters Emergency Committee in England. While bad web design is not a crime (yet), the poor design of the main and donation pages is a good first indicator that a site requesting money may not be all above board. The second strong indicator is the change of domain between the main dec.org.uk site and the actual payment site, hosted on securetrading.net.

An initial step to verify the identity would be to look at the records of who owns the domain dec.org.uk, which checks out. The payment site (securetrading.net), on the other hand, claims to be a UK company, but doesn't hold a .*.uk domain, was registered outside of the UK, and has obscured details in its records of domain ownership. According to those who have stepped through the process, checking the records at Companies House (where all UK companies must be registered) for securetrading.net shows that it is owned by another company, UC Media, which doesn't really exist, except as UC Group Ltd. To make matters even more interesting, UC Group has two insolvency notices listed against it in the Companies House records.

Having just provided approximately $100 AUD in donations (and a credit card record) to a site which didn't appear to be legitimate, and having done so just before a long weekend, the convicted individual could reasonably be expected to be concerned about where his records were going to end up. The 'hack' that he is claimed to have used is a simple URL modification to verify that the parent directories do in fact belong to the same site. Many phishing sites will fail this simple test, and it will generally provide a means to determine which institutions are targeted by a phisher, as well as identifying who owns the top level site.
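For readers curious what that kind of check looks like, the sketch below walks up the path segments of a URL and requests each parent directory in turn, which is essentially the manual test described above. The URL shown is a placeholder; running something similar against a site you do not own is exactly the behaviour that led to this prosecution, so treat it strictly as an illustration.

    from urllib.error import HTTPError, URLError
    from urllib.parse import urlsplit, urlunsplit
    from urllib.request import urlopen

    def parent_urls(url):
        """Yield the URL itself and each successive parent directory up to the site root."""
        scheme, host, path, _, _ = urlsplit(url)
        segments = [s for s in path.split("/") if s]
        for depth in range(len(segments), -1, -1):
            parent = "/" + "/".join(segments[:depth])
            yield urlunsplit((scheme, host, parent, "", ""))

    # Placeholder URL for illustration only.
    for candidate in parent_urls("http://www.example.com/donate/secure/payment.html"):
        try:
            status = urlopen(candidate, timeout=10).getcode()
        except HTTPError as error:
            status = error.code
        except URLError:
            status = "unreachable"
        print(status, candidate)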

There are two key lessons to take from this. Firstly, don't lie to law enforcement agencies; and secondly, be very careful when typing in website addresses by hand - you may trip an over-sensitive security system and end up in handcuffs.

While too late to help out in the above case, an interesting article was recently published which outlines a basic approach to profiling phishing attacks. By comparing the details of new phishing messages, along with the sites that they link to, against previously known attacks, it is possible to build up a basic appreciation of repeat phishers and of emerging techniques that are gaining widespread usage among phishers.

Although this can not readily be turned into a positive identification of the actual people behind the attacks, it can be used to identify collaborative efforts, and sources.

This sort of technique works well because humans are creatures of habit, and naturally lazy. Once a phisher has found something that works, and is comfortable with a few approaches that are known to be successful, they are more likely to reuse the same techniques in the future with only minor modification. Similar techniques can be applied to website defacements, tracking and identifying which groups are actively defacing websites, and how they are achieving it.
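A minimal sketch of that kind of profiling, assuming each attack has already been reduced to a set of observable features (hosting network, kit filenames, wording quirks and so on - all invented here): comparing feature sets with a simple Jaccard similarity is enough to flag likely repeat offenders.

    def jaccard(a: set, b: set) -> float:
        """Similarity between two feature sets: size of the overlap over size of the union."""
        return len(a & b) / len(a | b) if a | b else 0.0

    # Invented feature sets extracted from phishing messages and the sites they link to.
    KNOWN_ATTACKS = {
        "campaign-2005-07": {"kit:login.php", "host:198.51.100.0/24", "typo:recieve"},
        "campaign-2005-09": {"kit:confirm.php", "host:203.0.113.0/24", "greeting:Dear Costumer"},
    }

    new_attack = {"kit:login.php", "host:198.51.100.0/24", "greeting:Dear Costumer"}

    for name, features in KNOWN_ATTACKS.items():
        print(f"{name}: similarity {jaccard(new_attack, features):.2f}")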

Finally, a German security researcher who is well known for his research and disclosure of vulnerabilities affecting Oracle products has provided in-depth details on a number of vulnerabilities which affect different versions of the Oracle database product line. The rationale behind the disclosure appears to be a lack of credit from Oracle for his efforts in discovering the issues, along with the long lead time between the initial notification to Oracle and the release of a patch addressing the issue (up to two years in some cases). The issues range from Denial of Service attacks through to disclosure of sensitive information.

Who Needs Enemies? - 03 October 2005

Patch releases don't always go as planned. Following Microsoft's release last week of patches for Internet Explorer 6 and DirectX 8, it was discovered that the patches were identical to those released in May 2004 (Internet Explorer) and August 2003 (DirectX 8). The 'Release Date' and 'Date Published' for both updates showed a date from last week, as did Microsoft's Download Notification email which was sent out. At least the patches were for issues that had already been resolved (in the earlier patch releases).

Sometimes it isn't the patches, but the software itself, which doesn't work as planned. The launch of a search engine with built-in anti-phishing (trust) features encountered a hiccup when it was discovered that an obvious phishing site had been listed as a trustworthy site. The search engine, TrustWatch, is derived from the technology which drives the 'Ask Jeeves' search engines, and provides its evaluation of the trustworthiness of a site alongside the normal search results. Netcraft, which operates a similar service, also stumbled, giving the phishing site in question a risk rating of 'one', which is one step removed from a trusted listing.

Incidents such as this highlight the trouble faced by automated tools which attempt to classify trusted and untrusted sites, and the false sense of security that they can provide to the end users. The major risk is that they will classify a site as legitimate which isn't, and this will lead to users entering sensitive data onto a phisher's site, having trusted the results from the tools. Where the liability lies will surely be resolved by the courts following the first set of major breaches. Unfortunately it also gets the phishers working harder to ensure that their sites match the legitimate sites more closely, which has the result of making it more difficult for the non-technical user to differentiate between the legitimate and illegitimate sites.

Staying with odd behaviour from software, there are times when users would give almost anything for a suitable patch to be released. Reports have been circulating that malicious software has begun to surface which attacks the Microsoft Jet Database Engine vulnerability that was initially reported on in the column of 8 August.

The Jet Database Engine is used as a component of Microsoft Access, and the flaw has been known since April this year, with public exploit code circulating since early August. Microsoft have yet to issue official acknowledgment of the issue, and the malware currently circulating apparently opens a backdoor on infected systems, allowing their complete compromise (or rapid assembly into a zombie spam network). It has also been claimed that this vulnerability affects all Windows systems from Windows 95 onwards (previous reporting only identified Windows 2000 and later).

Aside from the issues which appear to have plagued Microsoft in the last week, one of Microsoft's main competitors, Apple, has been in the news over public disagreement about iTunes Music Store pricing.

A week before the rumoured opening of the Australian iTunes Music Store, the service is beginning to publicly upset some Record companies, with Apple's CEO Steve Jobs recently digging in against efforts to increase the base cost of songs on the US version of the store. Already, the CEO of Warner Music has publicly complained about the price point that Apple has set with the iTMS.

Online forums have lit up with the virtual wailing and gnashing of teeth (appropriate royalties paid, of course), at the perceived greed of the Record companies. Standard complaints and gripes claim that the industry needs to accept new distribution methods as part of their existence, that music sharing existed long before the Internet, and that the music industry should not complain about the services that at least appear to be providing a reasonable (for the consumer) price point for purchasing music in an electronic format. Mixed in with those responses are ongoing claims that the Record companies are behaving like a cartel through their Industry Association, and that they are monopolistic.

Not afraid to weigh in to the argument, the Canadian Recording Industry (through Environics) has released an amazing report (which it sponsored), claiming that Canadians between the ages of 18 and 29 are more likely to shoplift, cheat at school / university, and illegally copy music and software. It is suspected that the report was issued in an attempt to link the use of Peer to Peer (P2P) software with criminal and unethical activities. Although correlation does not imply causation, now that the report is on the public record it is likely that the general populace will only recall the sound-bite snippets from the reporting, and not the underlying agenda.

Unfortunately, this is not the only widely reported case involving the horrendous use of statistics to push an agenda in the last couple of weeks.

Further doubts have been cast on the accuracy of the recent Internet Security Threat Report from Symantec, when it was discovered that an English town with a population of fewer than 40,000 people had the second highest number of infected personal computers globally, behind London, and ahead of likely suspects such as Seoul. The report claims that 5% of the total infected systems were located within the town, out of approximately 2 million infected globally.

Five per cent of roughly two million is about 100,000 infected machines, which means that every person in the town would need to own at least two computers (some even more), with every one of those systems infected.

It is highly possible that the figure is an aberration: a localised spike in the sample data set which has then been extrapolated to fit the population size. This is a risk faced when conducting experimental work; applying set rules to data points which lie outside normal boundaries will return results that fail basic logic tests (such as more computers than people, and a 100% infection rate, in an area that isn't a global high technology centre).

The outcomes from the Symantec report aren't all bad, however. Parallel reporting from the ACE European Group (an insurance group which has partnered with Cisco, Deutsche Bank, KPMG and IBM to sponsor a series of reports) suggests that more than 50% of European companies have suffered significant financial losses from Information Technology system failures in the past twelve months. While system downtime due to hardware failure may be the first thought when it comes to a system failure, the research suggested that weak security, electronic crime and malware were significant contributing factors to the financial losses, with a quarter of respondents indicating that electronic crime was responsible for their losses. The survey claimed that even with 'continual investment' in areas such as security and protection, 'significant financial losses' were still taking place.

In passing news, the companies behind the competing formats for the next generation of home media content have been at each other's throats in the last week. The successor to the DVD is going to be one of two proposed formats, Blu-ray or HD DVD. With various high-powered companies in the respective camps, such as Sony and Matsushita for Blu-ray, and Microsoft for HD DVD, the misinformation has been flying thick and fast from each side about the opposing technology. Some of the key elements in the recent spat included the available space for content, and also the availability of the actual discs for consumer use.

Copyright © 2005, Sûnnet Beskerming Pty. Ltd.