Sûnnet Beskerming Pty. Ltd. occasionally produces small reports for free (gratis) distribution. The free content may cover any area in which Sûnnet Beskerming operates; examples include generic security advice, specific security warnings, development practices, and application tuning. The only caveat on reuse of information from this site is set out in the following paragraph.
Use and reuse of information from this site requires written acknowledgement of the source for printed materials, and a hyperlink to the parent Sûnnet Beskerming page for online reproduction. Content from this page cannot be reused in a commercial context without negotiating an appropriate licence with the site owner. Personal and educational use is granted without additional restriction, up to an amount in accordance with the principle of "fair use"; site users are encouraged to use fair judgement as to what amounts to "fair use". Please contact us if you reuse our content, so that we can provide more specific advice where necessary to improve your reproduction.
Sûnnet Beskerming do not normally offer services and products direct to the consumer, with this weekly column being the primary exception. One of the main difficulties with a weekly column is keeping the reported content fresh and relevant, when it may be more than a week out of date at time of publishing. To remedy this, and to provide more timely information for people who want up-to-the-minute news, Sûnnet Beskerming is announcing the establishment of a mailing list. The list will provide up-to-the-minute news on emerging threats, advice on good security practices, analysis and explanation of technical news items which may have an impact on your future IT purchases, and the collation and distillation of multiple news sources into a brief, accurate, unbiased synopsis of technology trends, with a focus on security. Sûnnet Beskerming do not restrict the focus of their services to a single operating system or hardware platform, so you receive the same level of service even if you do not run the leading Operating Systems.
Having as little as a few hours' warning can be enough to protect your systems against rapidly emerging threats. Some of the most prolific worms and viruses in existence can infect all vulnerable systems within a matter of hours, so every second counts. This is where Sûnnet Beskerming's services help.
As a recent example, subscribers would have been informed of the recent network compromise that exposed up to 40 million credit card details a full 12 hours before it was reported on the major Information Technology news sites, and more than four days before it reached the mainstream media.
Sometimes we are even faster than Google, being able to deliver timely, accurate information before any related content appears in the Google search results.
Not many people can afford to dedicate themselves full time to searching for and identifying this information, and so tend to find out only once something bad has already happened to their systems. Let Sûnnet Beskerming use their resources to bring you this information before you find it out the hard way.
Sûnnet Beskerming are offering a free trial membership period for consumer subscribers to the mailing list (businesses have their own, similar list, with added services). For subscription details, or more information, please send an email to email@example.com.
A Week to Stay Under the Blankets - 27 June 2005
To support the implementation of the Data Protection Act (1998), the UK has established a small body to audit and prosecute companies with respect to their compliance with the Act. The Data Protection Act was established to ensure that customers of UK businesses could maintain a level of trust in how their privacy-related information was handled. The new agency, the Regulatory Action Division, is part of the Information Commissioner's Office, and was reportedly established in response to the rapid increase in the number of companies holding privacy-related information.
Further news and reporting has come to hand on the NISCC alert first mentioned in last week's column. It appears that the attacks have been underway for quite a number of months, with at least 1,000 attacks claimed as a result of this activity, affecting more than 50 countries. Because of significant concerns that the attacks are the work of high-powered organised crime interests, or are even state sponsored, the NISCC released the advisory to warn others, to confirm the rumours that had been circulating about the attacks, and to confirm an online crime wave that has intensified recently, with more groups participating in similar attacks.
The mechanism of the attacks apparently relies on the information that other worms and viruses leak once they have infected a system. This information is then used to craft an approach customised to the intended target. Information such as internal workplace structures and personnel positions can be used to make the targeted infections look more like legitimate work mail, a common practice among people engaged in social engineering cons. What makes this approach different is the apparent central coordination of directed attacks by a determined criminal element, and the willingness to chase military and government systems, which are usually avoided due to their expected greater scrutiny and security (and the pursuit of breaches).
In information released by the NISCC, they believe that no more than 12 people are involved with the primary attacks being tracked, and that these people are able to turn an attack around (move to another target) within two hours. The NISCC further details that the attacks started in March 2004 and have continued at a rate of 10 to 20 per week since then. MessageLabs, which has been helping with the investigation and analysis, advises that the infecting trojan applications are changed with each attack to try to slip past system monitoring tools. Other sources indicated that the attacks are not as focussed as initially reported.
It also appears that the companies being targeted are those that have previously been infected, and thus leaked the information being used in the current attacks, leading some observers to consider these attacks a second round of related infections. For investigators, the lack of an apparent short term financial return raises doubts that the attacks are being coordinated by criminal concerns. The NISCC stated that one of the reasons it released the advisory was to discover the scope of the attacks within other organisations, and whether they extend to infrastructure beyond that which the NISCC is responsible for monitoring. A Home Office spokesperson was on record stating that the NISCC, in conjunction with local agencies, was close to turning off the source of the attacks. For observers of these trends, these attacks are not a surprise, and many have been seeing similar attacks for quite a while.
A recent virus infection in Japan has seen confidential information relating to Japanese nuclear power plants leaked to the virus authors. Almost 40 megabytes of information dealing with safety inspections carried out on numerous power plants was leaked. Apparently the leak was the result of a virus, referred to as the 'disclosure virus', which infected installations of Winny, a very popular Japanese Peer to Peer application. An employee of a Mitsubishi Electric affiliate responsible for conducting nuclear plant inspections was identified as the source of the breach. As allowed by company policy, the employee was using his personal computer for work purposes, and it was some of this data which was exposed. Affected plants included nuclear power stations at Tomari, Sendai, Tsuruga, and Mihama, and the lost information included imagery of plants during inspection, team names, locations where personnel were staying during the inspections, and copies of reports on regular inspections.
Vulnerabilities associated with the Winny Peer to Peer application have also been responsible for other Japanese information breaches:
- Investigation details from Hokkaido and Kyoto Prefectural Police Departments were leaked in March 2004.
- Confidential Self-Defence Force information, including training reports and rosters, was leaked in April 2004.
- Medical records from a hospital in Tottori Prefecture were leaked.
Plans announced by the US Military to maintain records of all high school and college students have raised concerns about the safety of the databases that will be developed to hold the records. As reported, the databases will collate commercial data with information already available to the US DOD, and will be managed to highlight students who meet the required standards for military service. This gives military recruiters an impressive edge: they can target students backed by an incredible supply of highly specific personal information. For students, there does not appear to be any opportunity to avoid the databases; even those who opt out will have their information maintained in separate databases, which will be checked against the primary databases to ensure no pollution of the live data. Even with the higher standards of data integrity that the military specifies, there is an enormous risk of data compromise and abuse of the contained information, especially as it places the data in a single framework. Privacy advocates are enraged at the planned system, with some believing that it effectively allows the US Government to bypass laws restricting it from collating data on US citizens by handing the work off to commercial firms. The US military already has a level of access to the US schooling system as a result of the No Child Left Behind Act of 2002, which can restrict the amount of federal funding available to a school that restricts access to certain personal information.
Perth-based company Clarity 1 is the first company to be prosecuted under the Australian Spam Act, which came into force in April 2004. The company's managing director claims that his business operations are legal, and is willing to defend his company's operations in court, although a lot of people regard him as quite a significant spammer. Although Clarity 1 is not the first company contacted by the Australian Communications Authority, it is the first to be prosecuted, accused of sending 56 million spam messages since April 2004. This is good news, but don't expect your incoming spam levels to drop off at all. In the survival of the fittest that the Internet encourages, this just removes the weakest of the spammers - those who establish themselves in countries with anti-spam laws.
News broke on Thursday of an Indian call centre employee who was caught selling details of customers from various UK banking institutions. The UK tabloid, The Sun, sent an undercover reporter to purchase personal details, credit card numbers and logon details for UK banking customers. A total of 1,000 individuals had their details sold for £4,250. The call centre employee indicated that he could sell up to 200,000 account details per month. The sale of this information is likely to be in breach of the UK Data Protection Act, 1998, which was enacted to protect the information of UK citizens held by companies and government agencies.
Sticking with identity theft related news, reports surfaced towards the end of last week detailing an under-reported side of identity theft cases. It appears that filing false unemployment claims with stolen identity data is more profitable than defrauding credit card companies, with as few as 100 accounts over 26 weeks able to yield over $1 million USD. The drawback to this approach is the requirement, in many instances, to collect the payments in person, which limits the maximum number of fraudulent accounts and increases the risk of exposure. In addition to being an added burden on Government coffers, in cases where there are fixed funds available for distribution it may actually empty them ahead of legitimate claimants, or make it more difficult for legitimate claimants to make claims. The other downside is that it may not actually be happening: there has been no further reporting to support the claims in the article, there are no specific sources beyond unnamed people, and the author of the linked piece appears to have a vested interest in placing this concern in front of people, as they represent a firm involved with payroll software.
It was announced recently that Microsoft's webmail service, Hotmail, will soon (from November) refuse to deliver any mail that does not have a valid Sender-ID. Sender-ID is an extra check applied to an email message which ensures that the domain an email was actually sent from (e.g. skiifwrald.com) matches the domain claimed in the email message itself. While this added check will not stop very much spam, it will make it more difficult for phishers to send out emails claiming to be from firstname.lastname@example.org, or some other financial institution, and it will effectively stop 'joe-jobs' being effective against hotmail.com addresses. A 'joe-job' is where somebody sends out email claiming to be someone else, including modifying the Reply-To and From email headers to identify as the person they are claiming to be. There are concerns that this extra step will actually make it more difficult to communicate with people who use hotmail.com email addresses, frustrating the legitimate user but not the spammer. It means that if your ISP, or mail sending domain, does not add a Sender-ID to outgoing messages, your messages sent to hotmail.com accounts will not arrive. Even though hotmail.com accounts have a 'trusted sender' type of capability, the filtering of messages will apparently take place prior to the checking of trusted sender status, which means that even if you have added someone as a trusted sender, their messages may still be deleted prior to delivery. It is also indicated that the use of message forwarding results in an incorrect Sender-ID being applied to the email message.
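The header forgery a 'joe-job' relies on, and the kind of domain comparison a Sender-ID-style check performs, can be sketched in a few lines of Python. This is a minimal illustration only, not the actual Sender-ID algorithm (which consults records published in DNS by the domain owner), and the domain names are made up:

```python
from email.message import EmailMessage

def domains_match(connecting_domain: str, msg: EmailMessage) -> bool:
    """Naive sketch of a Sender-ID-style check: does the domain in the
    From: header match the domain the message actually arrived from?"""
    claimed_domain = msg["From"].split("@")[-1].strip(">").lower()
    return claimed_domain == connecting_domain.lower()

# A 'joe-job': the attacker freely writes someone else's From: header.
forged = EmailMessage()
forged["From"] = "alerts@trustedbank.example"
forged["To"] = "victim@hotmail.com"
forged.set_content("Please confirm your account details...")

# But the mail really arrived from the attacker's own infrastructure:
print(domains_match("spammer-net.example", forged))   # mismatch -> reject
print(domains_match("trustedbank.example", forged))   # genuine -> accept
```

The From: header is just text the sender chooses, which is why a check against the actual sending domain catches the forgery.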
Sender-ID technology was developed by Microsoft, and has just recently gained approval from the IETF, along with the Sender Policy Framework (SPF), to enter the Experimental phase of the long road to becoming an Internet standard (which it is currently not on track to be), but some bodies are refusing to deal with it due to onerous licencing restrictions. For those who did not follow the previous link, it basically outlines that the Apache Software Foundation will refuse to support Sender-ID in any of their products, which include the web's most popular webserver (Apache), Struts, Jakarta, Ant and a number of other key Internet technologies. Both Sender-ID and SPF will help mitigate impersonation as far as the From and Reply-To parts of email are concerned, but their actual implementation has caused significant problems, such as undeliverable mail, for a number of users who have followed the specifications completely.
Following on from the earlier reported incident in which the existence of encryption software on a system could possibly be used to help establish criminal intent, a case has surfaced in the US where the viewing of Internet content may be classed as possession in certain circumstances. In this particular case a suspect was caught with illegal imagery in his browser cache, and the prosecution is trying to have this treated as equivalent to possession of the illegal imagery.
A scrutiny of software from various security software providers found that, combined, they had more reported serious vulnerabilities than Microsoft Windows over a given time period. The reporting was designed to highlight that applications used for the protection of systems and data are not infallible, and can cause or contribute to major problems with systems (such as incorrect virus definition files, or firewalls targeted to spread worms).
In a recent analysis, consulting firm Gartner claims that users are reducing their use of online commerce due to concerns over online security. The reported figures indicated that more than a quarter of those surveyed were reducing their online banking levels; of those concerned about their online banking, four percent have stopped banking online completely, and 14 percent have stopped paying bills online. One Gartner analyst is even calling this year a watershed year for commerce, security and the Internet.
If you did not notice that the introductory message has changed, please revise it as it contains details of a change of service being offered by Sûnnet Beskerming with respect to material related to this weekly column.
Increasing Public Awareness - 20 June 2005
People are starting to get the message that there are risks involved in using public Internet connection terminals, such as those at Internet kiosks in airports, Internet cafes, and public terminals at educational institutions. Recently, a member of the Australian Federal Police's Cyber Crime Unit went on record saying that they were starting to observe more cases of keylogging software being installed in these locations. The basic rule of thumb for anyone using a public Internet terminal is not to access any online finance sites, nor to use any sites requiring a login / password that you don't want compromised. For some people, their webmail password may not be all that important; their banking account details, however, are. Unfortunately, some people use the same login / password combination (or at least the same password) across multiple sites and online accounts, increasing the potential risk that they face. Basically, if you are accessing the Internet from a computer that is not your own, you should assume that there may be something on the system capturing all of your input. Even your own system is not safe, as many spyware / adware applications now install keyloggers.
A briefing recently released by the UK NISCC appears to have caused a ripple amongst other reporting bodies, with a number rushing to release their own derivatives of the NISCC original. The advisory, 08/2005, relates to the growing trend of targeted trojans: trojan horse attacks specifically constructed to go after a particular company, government agency, or industry group. What initially seems strange about the briefing is that there does not appear to have been a specific trigger for its release, i.e. no major attacks were announced in the last couple of weeks. However, continued attacks have formed a trend, and it is the analysis of this trend which prompted the release of this information. Initial reporting suggests that these attacks are arriving from China, North Korea and Russia, and that their sophistication points to either state-sponsored actions or high-powered organised crime.
Next week's column will cover in depth the issues surrounding this development, but the continuing advice which will defeat this attack is to NEVER open attachments in emails that you are not expecting. If this simple rule is followed, there is little chance of the trojan affecting you (unless other people in your organisation open attachments readily, and thus get infected).
Just recently, it was announced that there have been almost 60 reported security breaches of personal financial information held by US corporations, affecting 13.5 million US citizens, to date in 2005. This figure probably shouldn't shock regular readers of this column, but it does present in a single sentence exactly how many people are affected by this issue. It doesn't account for other cases of personal privacy information leaked by companies, which are also significant. The mechanisms for financial information loss include compromise of computer systems, loss of backup tapes, and employee theft. For Americans this is only the tip of the iceberg, as more and more personal privacy, medical and financial data is being sent overseas for processing and storage. In a case from 2003, transcripts of medical records of patients from UCSF's medical centre were held for ransom by an unpaid Pakistani transcriber, who had received the work through a chain of subcontracts beginning with the original American contractor. The transcriber was threatening to release the contents onto the Internet if they were not paid. This took place even though the HIPAA Act was in place, supposedly to prevent cases like this. The actual content of that Act does not prevent this from happening, however: it is designed to prevent the sale of information to third parties, not the offshoring of information for processing.
Even now, Americans do not have to be notified whenever their information is being sent overseas for handling, and the list of corporations and agencies making use of this is long and scary, including:
- Major financial institutions
- Various State and Federal government agencies
- Accounting firms sending financial data overseas for tax return preparation
- Hospitals sending radiology data to India for diagnosis without notifying patients
This issue is not restricted to Americans: many Western companies outsource various components of their business activities that touch their customers' privacy data.
As an addendum to the above, in the period between writing and publishing this column further breaches were identified at Equifax Canada (a credit processing firm) and a number of credit card providers. The Equifax breach apparently was the result of misused account login details. In fairly pleasing news, only 600 records were accessed, and the affected people have all been given access to a credit alert system to help them identify whether their credit record is being accessed without their permission. The credit card provider breach is more serious, with possibly more than 40 million credit card details from multiple credit card providers exposed, of which almost 14 million were MasterCard branded cards. MasterCard claim to have discovered the breach through the use of fraud detection tools, which identified a third party processing firm, CardSystems Solutions, as the point of exposure. Apparently the breach was the result of a network compromise at CardSystems Solutions, and the firm has been given a limited amount of time by MasterCard to come into line with their security requirements. This single reported breach roughly quadruples the number of people exposed to financial record theft this year, and some unfortunate people will actually have had their financial record data exposed through more than one of the breaches.
There are many different types of threats to websites and web applications, with continual research into new attack techniques and vectors. A recently announced attack mechanism could have major implications for the way the Internet, as a whole, works. The new mechanism is called HTTP Request Smuggling. Before this attack can be understood, it is necessary to understand what an HTTP request actually is.
Network communications are like sending letters in the mail. Let's say that you want to send a multi-volume encyclopedia from where you are to your friend in another city. It is possible to send all of the volumes at once, in a massive package, but there are risks: if the package gets lost, then neither you nor your friend have access to any of the information in the encyclopedia, and large packages also take a long time for delivery and attract significant shipping costs. A safer solution is to send one or two volumes at a time, allowing them to move through the system faster, and costing less for shipping. If one or two packages get lost, it doesn't mean the loss of the whole encyclopedia, and your friend at least gets some information from the rest of it. This is what some protocols do - sending packages of information (packets) without waiting for confirmation of delivery. This allows rapid delivery of packets, but risks loss of information which cannot be retrieved if a packet does not arrive. Data sent this way can also go to multiple addresses at once (a benefit for online gaming). With other protocols, if an expected packet goes missing the recipient sends a request to have it retransmitted. For example, your friend doesn't receive volume G of your encyclopedia within two weeks of you sending it, so they send you a message asking for another copy of volume G. This slows down the information transfer, as the recipient expects packets to arrive in order and sends queries if packets are missing. As the packets pass through intermediate network points on the journey to your system, the intermediate systems can add the equivalent of postmarks to identify that the package passed through them, and where it is headed next.
Sometimes they even make local copies of the information (caching) in order to reduce the time required to retrieve it. This is one reason the HTTPS protocol is slower than standard HTTP: because the content is encrypted between browser and server, intermediaries cannot cache it, forcing fresh copies direct from the server for every request.
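The "send it and hope" delivery style described above corresponds to datagram protocols such as UDP, while the "ask for volume G again" style corresponds to TCP, which handles acknowledgement and retransmission automatically. A minimal sketch of the datagram style, using Python's standard socket module over the local loopback interface:

```python
import socket

# The receiver: bind to loopback and let the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# The sender fires off one self-contained "package" (a datagram).
# Nothing tells the sender whether it arrived - there is no
# acknowledgement and no retransmission at this layer.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"Encyclopedia volume G", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data)

sender.close()
receiver.close()
```

On the loopback interface delivery is effectively guaranteed, so this example always succeeds; across a real network the datagram could simply vanish, which is exactly the trade-off the encyclopedia analogy describes.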
In normal use, an HTTP request is a single packet of information. In HTTP request smuggling, a second HTTP request is squeezed into the available space of the main request, with this strange double request arriving as a single packet. Using the mail analogy, it is like sending a volume of your encyclopedia to your friend, where the package also contains a magazine that is illegal in their city: the package gets past people who are monitoring the mail for content, like the magazine, that is not permitted. Similarly, the smuggled HTTP request is an attempt to get through a request which would otherwise be stopped by tools such as Web Application Firewalls (proxy based firewalls that are making a resurgence due to the poor security of many online applications). In some respects this is similar to the techniques some email-borne worms use to get past anti-virus software, naming their files with double extensions like .exe.scr in an attempt to be identified as a screensaver (.scr) rather than the executable which they actually are. The smuggled HTTP request provides the opportunity for a lot of attacks that had been prevented to suddenly become functional again.
The correct way for devices to handle different HTTP packet formats is outlined in RFC 2616, and it is deviation from these specifications which causes the problem with HTTP request smuggling. In one particular example, filling a request with more than a certain number of bytes forces IIS 5 (Microsoft's Internet Information Services web server) to process the request as two requests. To be successful, the attack requires a second, paired device which also processes HTTP requests, and exploits the difference in how the two devices handle the same request. The second device may be a caching device, such as an ISP may use to create local copies of popular web content to save on outbound bandwidth. In the above example, where IIS treats an oversized request as two distinct requests, the cache server can be made to cache the wrong page for a particular address. This is because the cache server thinks it has sent through one request, while IIS has seen two; as a result, the second page that IIS sends back will be associated with the next request from the cache server. Thus, it is possible that what appears to be http://www.safebank.com in the cache may actually be http://www.evilbank.com
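The exact byte counts that trigger the IIS 5 behaviour aren't reproduced here, but the classic shape of a smuggled request - one buffer carrying two conflicting Content-Length headers, which two devices may parse differently - can be sketched as follows (hostnames are illustrative only):

```python
# The request the attacker wants smuggled past the intermediate device.
smuggled = (
    b"GET /poison.html HTTP/1.1\r\n"
    b"Host: www.safebank.example\r\n"
    b"\r\n"
)

# The outer request carries two conflicting Content-Length headers.
outer = (
    b"POST /page.html HTTP/1.1\r\n"
    b"Host: www.safebank.example\r\n"
    b"Content-Length: 0\r\n"
    b"Content-Length: "
    + str(len(smuggled)).encode()
    + b"\r\n\r\n"
    + smuggled
)

# A device honouring the FIRST Content-Length (0) sees the headers end
# and treats what follows as a second, separate request; a device
# honouring the SECOND sees one request whose body is the smuggled text.
headers, _, remainder = outer.partition(b"\r\n\r\n")
print(remainder == smuggled)
```

The attack works precisely because the two devices disagree about where one request ends and the next begins, which is what lets the cache associate the wrong response with the next address requested.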
More Security Problems, and the Apple / Intel Move - 13 June 2005
This week's column is likely to be shorter than the last couple of weeks' have been. Leading the news this week is the switch from IBM to Intel processors for Apple systems. The switch was announced at the WorldWide Developers Conference (WWDC) held recently in San Francisco, and was hinted at in last week's column. Initially, Apple will continue to support and produce PPC systems, but, starting in 2006, Intel based Macintosh machines will be released to the market. One of the big surprises from the announcement was confirmation of a rumour that Apple had maintained dual versions of their current Operating System, OS X, on PPC and Intel based hardware for the last five years. This dual version was rumoured to be called Marklar, and maintaining it indicates quite an impressive ability to keep corporate secrets for five years. This admission, as well as the whole Keynote presentation being run on a Pentium 4 3.6 GHz machine, seems to indicate that the platform migration may not be as difficult as Apple's previous architecture moves. The Keynote also demonstrated PPC native (i.e. current OS X) applications running smoothly on the Intel machine, showing that the transition will not cost users a lot of functionality. The move is also expected to be relatively smooth for developers, with Wolfram, the developers of Mathematica, able to migrate Mathematica 5 from the OS X PPC version to the OS X Intel version with only 20 changed lines of code.
It seems like not a week can go by without another report of customer privacy data being lost or stolen. In the most recently reported case, Citigroup, through UPS, lost a package containing backup tapes with records on 3.9 million customers. The information included names, social security numbers, account details, account history and loan information for present and past retail customers. A simple technical step which would have somewhat protected the data would have been to encrypt the backup tapes at the time of archiving. In this case the tapes were unencrypted, allowing anybody with the appropriate equipment to easily extract the information. Unfortunately, the problem of identity theft (and subsequent financial fraud) only appears to concern technically minded people, and the wider population is unaware of the risks they face through compromise of data like this. Solutions exist, but companies seem not to be aware of them. There are two possible explanations for the number of breaches being reported:
- These issues could have always been happening, and it is only now, due to a number of privacy related laws, that we are starting to hear about them.
- Or, more worryingly, it is due to the rapid computerisation of business processes that has seen weaknesses creep into data storage and management systems, in particular the ease with which fraud can now be perpetrated.
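The "encrypt at time of archiving" step mentioned above does not need exotic tooling. The sketch below is NOT a real cipher - it is a deliberately weak XOR construction for illustration only, where a real archive would use a vetted algorithm such as AES - but it shows the principle: once encrypted, a lost tape is unreadable without the key, and the key holder can still recover the data.

```python
import hashlib
from itertools import cycle

def toy_xor(data: bytes, key: bytes) -> bytes:
    """Toy, insecure XOR 'cipher' - principle only, never use in practice.
    XOR is symmetric: applying it twice with the same key restores the data."""
    keystream = hashlib.sha256(key).digest()   # stretch the key to 32 bytes
    return bytes(b ^ k for b, k in zip(data, cycle(keystream)))

record = b"Name: J. Smith, SSN: 000-00-0000, Balance: ..."
tape = toy_xor(record, b"archive-key")

print(tape != record)                            # the tape is now unreadable
print(toy_xor(tape, b"archive-key") == record)   # the key holder recovers it
```

Had the Citigroup tapes been protected this way (with a real cipher), losing the physical package would not have exposed the records themselves.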
Though it was reported through a couple of smaller channels in the previous couple of weeks, news of a firm directly involved with paying to infect systems with adware has come to the attention of Information Week. One of the more interesting tidbits from the reporting is the cost / return ratio. It is claimed that as much as $75,000 USD per annum could be collected from machines that cost $12,000 USD to infect, with a timeframe for the infection of only one month. At a going rate of 6 US cents per compromised machine, this indicates that 200,000 machines were hit in that particular run of infections (2.4 million per annum). A compromised machine in the United States, or another English speaking country, apparently attracts a higher premium than a comparable machine in a non-English speaking country, so the actual number of infected machines may be higher. The bad news for people who want to prosecute the company involved, iFrameDollars, is that they are located in Russia, effectively out of reach of many of the people infected, and of the agencies that would be after them. The apparently public manner in which this company conducts itself may even indicate protection by, or other involvement with, organised crime interests. It has been suggested, hinted at, and inferred that organised crime was taking a greater interest in the seamier side of the Internet, and now this report actually details the financial costs and benefits that these operations can provide.
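The quoted figures can be checked with simple arithmetic (all values are those claimed in the reporting, worked in whole cents to avoid rounding):

```python
rate_cents = 6                        # US cents reportedly paid per infected machine
monthly_outlay_cents = 12_000 * 100   # the reported $12,000 USD spend, in cents
# ($75,000 claimed collectable per annum against that one-month outlay
#  is a better than sixfold return.)

machines_per_month = monthly_outlay_cents // rate_cents
print(machines_per_month)          # machines hit in the one-month run
print(machines_per_month * 12)     # the per-annum rate quoted above
```

This reproduces the 200,000 machines per month and 2.4 million per annum figures in the article.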
The removal of this company will not solve the problem of paid-for spyware and adware. In the true capitalistic / free market economic model that the Internet seems to support, other companies will spring up to fill its place, if they are not already establishing themselves within their local markets. The issue of liability has led a number of observers to declare that Microsoft should be held liable for the weaknesses in their products which allow these companies to make money. The problem with this idea is that it would be essentially impossible to bring about. At least the issue of paid-for spyware and adware is becoming a more important matter for Internet users and interested agencies. This helps further raise awareness of the need for security for the end user, and highlights the concerns that various technical companies and people have been identifying for some time. The added focus may allow these protective technical companies to develop and implement better safeguards and protection.
The future Windows Longhorn release from Microsoft is supposed to close a lot of the holes that spyware and malware tend to exploit. However, following the recent announcement of the future Windows Command Line Interface (CLI), Microsoft then came out and said that it would not be appearing in the initial release of Longhorn. Again, some observers have raised concerns as to the actual effectiveness and security that Longhorn will be able to deliver, suggesting that it may turn out to be more like an updated Windows XP than the groundbreaking Operating System it has been touted to be.
Bluetooth Insecurity and Apple / Intel Rumours - 6 June 2005
The Bluetooth protocol may not be as secure as once thought. In a paper that is due to be released in the next few days, a number of cryptographic researchers believe that they have discovered a method which allows them to hijack the communications between two Bluetooth enabled devices (such as a Bluetooth headset and phone, or Bluetooth keyboard and base) even when the security features are enabled. Previous attacks against the secured Bluetooth mode required interception of the first data packet between the devices, while the new attack method can be successfully used at any time.
There is some conjecture as to the mechanism which allows for this attack to take place, given that the research hasn't been fully published. What appears to be the mechanism of the attack is an attacker introducing a third device to the network, which then pretends that it is one of the original devices. This additional device then sends out a command stating that it has lost the link key, which is required to interpret the information flowing between the paired devices. This then should force a re-pairing of the Bluetooth devices to establish a solid communications link again, using a PIN to generate the link key. At this stage, a user may be prompted to re-enter their PIN to re-establish the link.
This is the point where the attack can be defeated.
By refusing to re-enter the PIN, the user denies the third device the material it needs to crack the connection. Unfortunately, this results in a loss of secure mode Bluetooth usage, essentially allowing the attacker success through denial of service. The PIN crack process can take as little as 0.06 seconds for a 4 digit PIN, so the recommendation is to use longer PINs, which take longer to crack but will also eventually be defeated. The actual key size is 16 bytes, which need not be limited to alphanumeric characters, providing a means to extend the time required to crack the key. What can be done to mitigate this attack is to use all 16 bytes of the available key, force the devices to notify the user of the need to re-pair, and use non-alphanumeric characters in the key (if the devices allow it).
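Taking the published 0.06-second figure for a 4-digit decimal PIN and assuming cracking time scales linearly with the size of the keyspace (the scaling assumption is ours), the benefit of longer and richer PINs can be estimated:

```python
# Estimated brute-force time, scaled linearly from the reported 0.06 s
# needed to exhaust a 4-digit decimal PIN.
BASE_TIME_S = 0.06
BASE_KEYSPACE = 10 ** 4  # 4 decimal digits

def crack_time_s(length, alphabet_size=10):
    """Rough time to exhaust a PIN of `length` symbols drawn from `alphabet_size`."""
    return BASE_TIME_S * (alphabet_size ** length) / BASE_KEYSPACE

# An 8-digit decimal PIN takes roughly 10 minutes at this rate, while a full
# 16-byte key (256 possible values per byte) is far beyond practical brute force.
```

Each extra decimal digit multiplies the attacker's work by ten, which is why using the full 16-byte key with arbitrary byte values pushes the attack out of reach.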
In a move which would satisfy conspiracy theorists everywhere, along with Wintel diehards who refuse to purchase an Apple product, C|Net published an article late on Friday, US Pacific Time, claiming that Apple will be ditching IBM as their chip supplier in favour of Intel. Rumours over the last few months have hinted at secret negotiations between Apple and Intel, but the wider consensus is that a move from IBM to Intel would be a death knell for Apple. The report from C|Net hints at Steve Jobs unveiling this information at the upcoming WorldWide Developers' Conference (WWDC), being held in San Francisco over the next week. The C|Net article appears to have a single, unnamed source, and is the only original reporting of this topic available; reports from other agencies are derived from it.
Initial analysis suggests that it is an inaccurate report, and may be being used for some nefarious purpose, such as a means to artificially depress Apple's stock price (AAPL) prior to, and just after, Steve Jobs' Keynote address at the WWDC (10 am US Pacific Time, Monday). The timing of the article seems especially suspect, being positioned after the close of business for the continental United States and at the longest possible remove from the WWDC Keynote, leaving little scope for other reporting agencies to investigate. Having said that, it is quite likely that Apple will be using Intel chipsets as part of their solutions, but not as the primary CPU. Intel develop more than the x86 line of CPU chips, and in all likelihood it is one of these products that Apple may use in their systems. For example, some of the Airport range of base stations from Apple use AMD chipsets, and it is rumoured that some of the XServe line have Intel chips inside (but not as the CPU). It is also possible that Apple will licence Intel to produce PPC chips, given the difficulties in supply from IBM. IBM, Intel and Apple refused to comment to C|Net, which could indicate the regard in which they hold the rumour (or it could just be a tacit admission of the veracity of the information). Technology forums and Apple rumour sites lit up following the news, and the speculation is likely to continue through to the end of the Keynote address.
Historically, the WWDC Keynote has been used to introduce new, groundbreaking products, such as the G5 line of CPUs based on the PPC 970 chip line from IBM, which is in turn based on their POWER technology. Other rumours are suggesting that the big announcement will be dual-core PPC 970 chips from IBM to power the professional line of Apple products (i.e. models with the prefix 'Power' - PowerBook, Power Macintosh).
In a strange occurrence last week, the New York Stock Exchange closed four minutes early on the first of June. Traders were recalled to the trading floor long after the technical close time of the exchange to possibly complete the remaining four minutes of trading while the source of the early close was being sought out. According to a Reuters report, the disruption was caused by an error message which flooded the primary and backup systems with millions of copies of itself, overloading the systems and causing a localised denial of service. For a system which had been touted as being extremely reliable and tolerant of multiple failure points across it, failure due to an uncontrolled error message is almost ironic. The exchange opened at the normal time the next morning, but the loss of trading time could have significant financial cost, not only for the exchange, but also for the trading firms, as many orders timed for the close of business would not have been filled.
Interestingly enough, a Tom Clancy novel explored the possible outcomes from a disruption to the electronic systems that controlled major stock exchanges (it was a cornerstone of the plot in "Debt of Honour"), although the outcomes from this disruption have been nothing like what was described in the novel.
Although reported more than a week ago, the news that the latest version of the Netscape Internet browser (Version 8) broke Microsoft Internet Explorer support for various XML products has led some posters in various Internet forums to opine that it is only fair turnabout for Microsoft, given their history of intentionally disabling previous versions of the Netscape browser. Other anticompetitive practices of Microsoft have been under scrutiny recently in the European Union, where issues were raised with respect to Media Player integration, and how Real and other media format providers were not afforded the same access for inclusion with the default Windows installation. Microsoft were given time to comply with the ruling to provide a reduced media edition of Windows (the default Windows installation minus Windows Media Player and some other components), along with some other requirements, and were threatened with fines if they did not comply. Microsoft has made legal attempts to win an extension or have the ruling overturned, and a decision regarding the imposition of fines should be due soon.
Sticking with Microsoft news, the final mainstream update for Windows 2000 will be released shortly (security fixes will continue until 2010). One of the significant applications that will not be upgraded is Internet Explorer. The current version (IE 6) will remain the final supported version for Windows 2000, with the forthcoming IE 7 not being made available for it. This means that customers who remain with Windows 2000 will not get the additional benefits of the newer version of Internet Explorer, such as tabbed browsing. To gain this and other functionality, customers will need to use a third party browser such as Opera or Firefox. Windows 2000 was never released to the consumer market, despite many users believing it to be the best Operating System ever released by Microsoft, so the cessation of support should not affect too many home users.
One of the important considerations that comes from not supporting IE 7 is that web application developers may need to ensure their software remains backwards compatible with IE 6. While IE 6 is a relatively recent application, its standards support is lacking in several key areas, such as CSS 2 and 3 compliance. While all software gets EOL'ed at some stage, some of the more cynical observers are declaring that the reason Windows 2000 is being EOL'ed now is that it is a threat to the uptake of the future Windows Longhorn release, as, in their opinion, Windows 2003 and Windows XP are not up to the quality of Windows 2000 as server Operating Systems.
Data loss from major banks continues, with investment bank UBS recently announcing that it has lost a hard disk from its Tokyo office. The disk went missing while upgrades were being made in Hong Kong and Tokyo. Although it was marked for destruction, the loss of the disk has caused some consternation. The carrier that transported the disk has been located, but not the disk itself. It is not known what the disk held - it is feared that it may contain trading histories for corporate clients of UBS.
Though not a story from the past week, the patent system in the United States has been under fire from technical researchers and companies for allowing predatory patents, which are obvious and not unique, to be granted - not that this stops those same critics from applying for patents themselves. With the Free Trade Agreement with Australia, the Australian patent system is expected to be brought more into line with the US model. The EU is currently debating whether to allow software / business process patents in its patent system, and the impending defeat of the EU constitution may delay any uptake.
One of the key issues revolves around the concept of a software or business process patent. Technically, these patents are for ideas, not devices, which many argue is contrary to the actual definition of a patent. Software patents are even more under fire, with patents being granted for 'inventions' that have obvious prior art, or for concepts so technically simple that they are considered obvious to anyone skilled in the art. Either of these tests should prevent a patent from being granted, according to the definition of a patent, but many such patents are still being issued regardless.
Although many large corporates, such as Microsoft and IBM, are patenting everything they can, industry leaders have gone on record stating how the current Intellectual Property situation actually stifles innovation, and is forming a barrier to entry for smaller companies and independent researchers.
- If people had understood how patents would be granted when most of today's ideas were invented and had taken out patents, the industry would be at a complete stand-still today. - Bill Gates (1991)
- Some will say that such rights are needed in order to give artists and inventors the financial incentive to create. But most of the great innovators in history operated without benefit of copyright laws. - Roderick T. Long
- Who owns my polio vaccine? The people! Could you patent the sun? - Jonas Salk (1914-1995)
At a recent security conference held by AusCERT on the Gold Coast, the founder of Kaspersky Labs, Eugene Kaspersky, spoke of the changing trends in virus and worm writing and propagation. The trend that he described was that virus and other malware authors are moving towards creating, managing and selling botnets. This viewpoint echoes others in the security community, including Mikko Hyppönen from F-Secure, a Finnish security company. Once a computer is compromised by malware, be it a virus, worm or trojan, it is possible that the computer is able to respond to external commands from the person(s) responsible for the malware propagation. If this happens, the system is regarded as a bot or a zombie, where the computer responds to commands and performs actions at the discretion of someone other than a local user. Co-ordination of multiple computers under one controller forms a botnet. A common botnet may range from 5,000 to 10,000 unique compromised systems, and they are commonly used to further infect other systems, or as spam delivery networks.
The difficulty for companies trying to fight this trend is that the software used to compromise the systems may only be used on a one-off basis, and the 5,000 to 10,000 compromised systems may not show any discernible slowdown or damage to the local users. With such a small rate of infection across the millions of computer systems connected to the Internet worldwide, the infections can get lost in the background noise, which is one of the aims - to keep them out of sight of the companies fighting them.
To counter this issue, anti-virus and anti-spyware companies utilise a range of passive and active searching measures to try and catch as many infection vectors as possible. Microsoft has recently joined in this approach, looking to discover exploits in their systems that may be circulating on the Internet.
After the above information was drafted, Computer Associates (CA) came out with a claim that a large coordinated attack on the internet is imminent, through the use of botnets. CA researchers believe that the botnet is being established through three different trojans, and access to the botnet is being sold for as little as 5 US cents per compromised system:
- Glieder-AK - This is the initial trojan, which opens backdoors into infected systems to allow the remaining two trojans to access and install themselves. Several variants of this trojan were released on June 1, and it appears to have been designed as a lightweight rapid infector, intended to infect as many systems as possible in the shortest timeframe.
- Fantibag - This secondary trojan prevents infected systems from communicating with anti-virus update sites and update.microsoft.com in an attempt to isolate infected systems from an ability to repair themselves.
- Mitglieder - This final trojan provides a backdoor into the system for the remote hackers to control the system.
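One common way malware of this class isolates a machine from update sites is by remapping the relevant domains in the hosts file. Assuming that mechanism, a minimal detection sketch might look like the following (the watched-domain list and helper are our own illustration, not CA's tooling; the antivirus domain is hypothetical):

```python
# Scan hosts-file text for entries that remap known security-update domains.
WATCHED_DOMAINS = {"update.microsoft.com", "liveupdate.example-antivirus.com"}

def suspicious_entries(hosts_text):
    """Return 'name -> address' strings for hosts lines that remap a watched domain."""
    hits = []
    for line in hosts_text.splitlines():
        body = line.split("#", 1)[0].strip()  # drop comments and whitespace
        parts = body.split()
        if len(parts) < 2:
            continue  # blank line or malformed entry
        address, names = parts[0], parts[1:]
        for name in names:
            if name.lower() in WATCHED_DOMAINS:
                hits.append("%s -> %s" % (name, address))
    return hits
```

Any hosts entry for an update domain is suspect, whether it blackholes the name to 127.0.0.1 or points it at an attacker-controlled address.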
In other news from the past week, alternative internet browser company Opera recently carried out a poll which found that around half of all internet users felt that browser choice was an important factor in online security, but only 11% had switched browsers as a direct result of seeking more security. Disturbingly, one third of the internet users polled admitted that they didn't know whether browser choice made any difference to how susceptible a computer was to infection by malicious software, and a further 17% did not believe that it had any effect whatsoever.
A new Top Level Domain (TLD) has recently been announced, .xxx - ostensibly for adult content sites. This is useful
for people who manage internet filters (such as NetNanny), but it is debatable how effective the new TLD will be.
The cost of registering domain names on this TLD is ten times that for a .com or .net domain, and
it is expected that companies will not shut down their .com addresses in order to move across to the .xxx addresses.
A domain name merely points to an address, so multiple domain names can point to the same IP address, and thus the same
site. The initial reason for different TLDs was to share the load of DNS requests on root servers whenever a local
server would not hold a cache of the address. Initially, .com was for commercial sites, .net for network providers
and related companies, .org for non-profit organisations, and so on. With the growth of the internet, this model broke
down, and the importance of specific TLDs declined, with .com becoming the ubiquitous TLD. A practical example of
this is BigPond, Telstra's Australian ISP. Technically, their website should appear at
http://www.bigpond.com.au; instead, http://www.bigpond.com.au redirects to http://www.bigpond.com.
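The point that multiple names can map to one address can be sketched as follows. The resolver is injectable so the example runs without network access; the stub table and its documentation-range IP addresses are hypothetical:

```python
import socket

def shared_addresses(domains, resolve=socket.gethostbyname):
    """Group domain names by the IP address they resolve to."""
    by_ip = {}
    for domain in domains:
        by_ip.setdefault(resolve(domain), []).append(domain)
    # Keep only addresses that more than one name points at.
    return {ip: names for ip, names in by_ip.items() if len(names) > 1}

# Hypothetical stub resolver, so the sketch runs offline.
STUB_TABLE = {
    "www.bigpond.com": "203.0.113.10",
    "www.bigpond.com.au": "203.0.113.10",
    "www.example.org": "203.0.113.20",
}
```

Against a live resolver the same function would show real aliases; with the stub, the two BigPond names collapse onto a single address, exactly as a redirect-plus-shared-hosting setup would behave.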
According to German technical magazine c't, and reported in The Register, it may be possible to upgrade XP Home to essentially an XP Pro Lite, which includes the capability for Remote Desktop, user management, and improved security features. The specific details are reported in the print version of the magazine (German only), and are replicated on The Register's website. Essentially, it is the simple change of a couple of registry values and setup files. While it may seem strange that the Pro version features can be obtained so easily from the consumer version, it is actually quite a common practice for product manufacturers (and software developers) to develop one version, but switch off various capabilities through configuration settings for different price points. This allows them to streamline their development / production chains. The same principle forms the basis for many basic hardware modifications, such as flashing DVD players to be region-free. In one specific case, USRobotics released two modems which were exactly the same hardware - the Sportster (low end) and the Courier (high end). The difference between the modems was a simple initialisation string which, when leaked, allowed consumers who owned the low end modem to upgrade for free. With the XP Home to XP Pro Lite modification, support for SP2 is lost unless it is slipstreamed into the installation CD. This code may actually be left over from the NT 4 codebase (2000, XP and 2003 are all derived from the NT codebase, while 98, 98 SE and ME are derived from the 9x codebase), where a similar hack allowed NT 4 Workstation to be changed into the Server version.
Sony has announced that they are releasing a new protection technology to prevent multiple copies being made from their CD-R disks. Another disk protection device was defeated through the use of a simple black marker pen to cover a single track on the disk. The use of a marker pen, or holding down of the shift key to avoid the autorun protection on other disks, could be considered illegal under the United States Digital Millennium Copyright Act (DMCA), as it is a means of circumventing access control placed by the vendor.
Finally, mainstream media appear to be picking up on the idea that once information is computerised - in particular, placed on the internet - it may exist forever, especially if it attracts popular interest. A recent column by CNN discussed the long term storage of information by Google, how it may cause issues for some customers, and how it could become a source of potential abuse where privacy regulations change the accessibility of information after certain periods of time. Even though Google's corporate mantra is 'Don't be evil', they have basically become the tacit gatekeepers of the world's electronic information. Indeed, Google is the hacker's best friend. It is trivial to access information that should be protected, as Google has innocently indexed and archived copies of information that should never have been exposed to public view in the first place - this is the fault of the administrator, not Google. Perhaps the simplest approach is to consider any information placed on the internet to be unprotected and readable by anybody, good or evil. If it could possibly be used to embarrass you, then perhaps it shouldn't be placed on the internet.