We have recently been implementing an attack on ZigBee communication. The ZigBee chip we have been using works pretty much like any other: it listens on a selected channel and, when a packet is being transmitted, stores the data in an internal buffer. When the whole packet has been received, an interrupt is signalled and the micro-controller can read out the whole packet at once.
What we needed was a bit more direct access to the MAC layer. The very first idea was to find another chip, as we could not do anything at the level of abstraction described. On second thought, we carefully read the datasheet and found that there is an “unbuffered mode” for receiving as well as transmitting data. There is a sentence that reads “Un-buffered mode should be used for evaluation / debugging purposes only”, but why not give it a go.
It took a while (the datasheet does not really get the description right, there are basic factual mistakes in it, and the micro-controller was a bit slower to service hardware interrupts than expected) but we managed to do what we wanted: get at interesting data before the whole packet has been transmitted.
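For the curious, the difference we exploited looks roughly like the sketch below. It is only a minimal illustration with hypothetical driver and register names (read_rx_fifo, read_rx_byte_register, HEADER_LEN and friends are stand-ins); the real transceiver's interrupt sources and registers will differ.

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_FRAME  127   /* maximum IEEE 802.15.4 MAC frame length */
#define HEADER_LEN 9     /* hypothetical: bytes of MAC header we care about */

/* Placeholders for the (hypothetical) radio driver. */
extern size_t  read_rx_fifo(volatile uint8_t *buf, size_t max);
extern uint8_t read_rx_byte_register(void);
extern void    process_frame(volatile const uint8_t *buf, size_t len);
extern void    react_to_header(volatile const uint8_t *buf);

static volatile uint8_t frame[MAX_FRAME];
static volatile size_t  rx_len;

/* Normal (buffered) mode: one interrupt per complete packet.
 * By the time the MCU sees anything, the frame is already over the air. */
void packet_complete_isr(void)
{
    rx_len = read_rx_fifo(frame, MAX_FRAME);
    process_frame(frame, rx_len);
}

/* Unbuffered ("evaluation/debugging") mode: one interrupt per byte,
 * so the MCU can inspect the MAC header while the rest of the packet
 * is still being transmitted -- exactly the early access we needed. */
void rx_byte_isr(void)
{
    if (rx_len < MAX_FRAME)
        frame[rx_len++] = read_rx_byte_register();
    if (rx_len == HEADER_LEN)
        react_to_header(frame);
}
```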
This was not the first occasion on which debug mode or debug information saved us from defeat when implementing an attack. It made me think a bit.
This sort of approach exactly represents the original meaning of hacking and hackers. It seems that this sort of activity is slowly returning to universities, as more and more people are implementing attacks to demonstrate their ideas. It is not so popular (my impression) to implement complicated systems like role-based access control, because real life shows that there will be “buffer overflows” allowing all the cleverness to be bypassed. Not many people are interested in doing research into software vulnerabilities either. On the other hand, more attacks on hardware (stealthy, subtle ones) are being devised and implemented.
The second issue is much more general. Is it the case that there will always be a way to get around the official (or intended) application interface? Surely, there are products that restrict access to, or remove, debugging options when the product is prepared for production — smart-cards are a typical example. But disabling debug features introduces very strong limitations. It is very hard or even impossible to check correct functionality of the product (hardware chip, piece of software) — something not really desirable when the product should be used as a component in larger systems. And definitely not desirable for hackers …
All chips at least have a test facility. The manufacturer has to sort working from broken chips; this must be done efficiently because time spent in an automated tester is surprisingly expensive (it adds noticeably to the cost especially of cheaper chips). There’s no obvious reason why test inputs couldn’t be disabled by an on-chip fuse, though.
There’s a “debug mode” on NetGear 834GT routers, which will enable utelnetd… Sky.com’s broadband ISP uses a slightly-hardened version, which can still be easily made to exhibit the flaw^H^H^H^H feature, see http://steve-parker.org/urandom/?y=2007&m=2#netgear
I told Sky about it two months ago; they do not seem to be concerned that a simple phishing attack could open up anything on the router.
Hello
In the particular example mentioned, the “debug mode” did not enable an attack, and its lack would not have prevented the attack. The only help it gave was that it let you use the stock ZigBee chip rather than build your own. If the protocol is open, it doesn’t weaken security that one particular chipset gives you access to information that it shouldn’t, because you could have gotten the same information by building your own decoder. It only made the attack easier (hence cheaper).
I think the reason the chipset makers labeled it ‘debug/eval’ is the performance issues that can be created by increasing the rate of interrupts generated by the chip.
— Arik
As a general rule, debug modes are information-rich so that faults etc. can be found (often giving unintended extra functionality). Security products, by contrast, have a history of being the opposite.
It is part of the security-by-obscurity ethos that has in the past been prevalent in engineering thinking when it comes to designing secure systems.
As has been shown many times, obscurity is not real security, as details will always leak or be found (think the Microsoft Xbox, Sky cards, etc.).
The only valid reason for not having debug modes permanently enabled is resource issues, be they functional / speed / real estate / pin usage, etc.
There have been many occasions where I have designed products with low-cost ICs and have not used the chip in quite the way the manufacturer intended; this sideways design has in many cases saved a large cost and sometimes considerable PCB real estate.
Unfortunately, due to the way RF ICs etc. are going (towards SDR), the opportunity for doing this as a hardware hack is diminishing, which makes software hacks (via firmware) much more rewarding.
to Arik
what you say about making the attack easier is exactly the point I wanted to make. One does not have to search for the right chips and design one's own device, nor solder anything. A hacker just buys an off-the-shelf product and spends some time reprogramming its functionality. Many more people are able to do this.
This raises an interesting subject: many software products have problem-determination and debugging facilities hidden in them, which may be activated by “open sesame” incantations of some sort, and could probably be used for attacks.
Dan,
The (somewhat provocative) title of your excellent post is “Debug mode = hacking tool”. I understand what you’re trying to say, but please say it the way it is – “Debug mode simplifies implementing a cheap hacking tool”. Saying it like you did implies that debug mode is somehow wrong or is a feature that shouldn’t exist, given the current interpretation of the term “hacking tool”.
— Arik
Arik,
there is THE question mark in the title ;-) Absolutely, debug mode is useful in many scenarios. I just got the impression that vendors should pay a bit more attention to how it is being used. I do not have any definitive answer. It is just my experience that it is much easier to perform an attack when you have access to error code texts (when analyzing some source code) or to non-standard modes of operation (like debug mode) when special hardware is used in non-standard applications.
What I really do not know is how bad a debug mode is from the security point of view and how it should be allowed to be used.
Dan,
I think that concentrating on the device having more features takes away from the real problem – that the protocol has a vulnerability. I know it’s a controversial subject, and I hold the opinion that the chip should let you do whatever you want. It is not up to the chip manufacturer to cover up for a design error.
If that rationale holds, you might argue that having the ability to switch an Ethernet device to promiscuous mode should only be allowed on registered hardware devices, because it can compromise the security of the network. While I agree that, yes, a scarcity of promiscuous-capable Ethernet devices will probably prevent some people from sniffing sensitive information off the network, this is side-stepping the real problem – that there are clear-text credentials running on the network. While this increases the price of an attack, the attack is still very much possible. If you switch away from your clear-text protocol you cover both attack vectors and then some.
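To make the analogy concrete, here is a minimal sketch of how any commodity NIC can be switched into promiscuous mode on Linux using a raw AF_PACKET socket. The interface name "eth0" is an assumption, and it needs root (or CAP_NET_RAW) to run; the point is only that the capability is there in every ordinary device.

```c
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>

int main(void)
{
    /* Raw socket that sees every Ethernet frame the kernel receives. */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    /* Ask the kernel to put the interface into promiscuous mode. */
    struct packet_mreq mr = {0};
    mr.mr_ifindex = if_nametoindex("eth0");   /* assumed interface name */
    mr.mr_type    = PACKET_MR_PROMISC;
    if (setsockopt(fd, SOL_PACKET, PACKET_ADD_MEMBERSHIP, &mr, sizeof(mr)) < 0) {
        perror("setsockopt");
        return 1;
    }

    /* From here on, frames addressed to other hosts arrive as well --
     * including any clear-text credentials crossing the wire. */
    unsigned char frame[2048];
    ssize_t n = recv(fd, frame, sizeof(frame), 0);
    printf("captured %zd bytes\n", n);

    close(fd);
    return 0;
}
```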
I hope the analogy is clear,
— Arik
Arik,
“If you switch away from your clear-text protocol you cover both attack vectors and then some.”
Ouch, there is nothing particularly wrong with clear text protocols, and a heck of a lot right with them. It is one of the reasons why they are so prevalent on the Internet.
The real problem is securing sensitive information and using appropriate security mechanisms where required.
For instance, a clear-text protocol that sends “PSWD=XX…XX” is not insecure if XX…XX is secure and not reusable (think of it as encrypted with a one-time pad, for simplicity of explanation).
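As a purely illustrative sketch of that point (with a made-up password and pad; no real protocol would be this naive), the credential sent in clear text can be the password XORed with a pad segment that is used exactly once, so capturing or replaying the “PSWD=” field gains an eavesdropper nothing:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical shared secrets: the password and one segment of a
     * one-time pad that both ends agree to use exactly once. */
    const unsigned char password[] = "hunter2";
    const unsigned char pad[]      = { 0x9c, 0x01, 0x5e, 0xd7, 0x33, 0x8a, 0x45 };
    unsigned char field[sizeof(password) - 1];

    /* Encrypt the password with the pad segment; the segment is then
     * discarded, so the field is worthless to anyone sniffing the wire. */
    for (size_t i = 0; i < sizeof(field); i++)
        field[i] = password[i] ^ pad[i];

    printf("PSWD=");
    for (size_t i = 0; i < sizeof(field); i++)
        printf("%02x", field[i]);
    printf("\n");
    return 0;
}
```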
All a debug mode should do is assist in resolving protocol issues; therefore any error message should be appropriate for debugging (e.g. “Host not found”) and not for leaking information (e.g. “Password incorrect”).
Only if the communications protocol is secure and both the host and client are authenticated with respect to each other should security-related information be given by either side to the other.
The problem most designers have is trying to decide what is appropriate for protocol debugging and what actually leaks information, and at what stage it is appropriate to release information of any kind. Unfortunately the usual result is either no information or all information in all contexts, neither of which is particularly helpful in practice.
The fly in the ointment as far as security is concerned is “unknown attacks”, in that information assumed safe to give might, under a new attack (unknown at design time), leak useful information to the attacker. Even this, if you give it some thought, can be mitigated against to a very great extent.
Some methods of attack, such as denial of service, are believed to be difficult to mitigate against; under these conditions, minimal information transfer and minimal load on the host are usually considered the desirable design conditions. Therefore the design choice is generally no debugging information or response, as this would appear to minimise the attack potential. However, again, careful thought shows that there are ways to minimise the attack potential and still provide debug information at a level that allows protocol issues to be resolved.
Poor design considerations are not a good reason to remove a useful diagnostic tool.