Observations from two weeks of SSH brute force attacks

Earlier this month, I blogged about monitoring password-guessing attacks on a server, via a patched OpenSSH. This experiment has now been running for just over two weeks, and there are some interesting results. I’ve been tweeting these since the start.

As expected, the vast majority of password-guessing attempts were quite dull, and fell into one of two categories. Firstly, there were attempts with a large number of ‘poor’ passwords (e.g. “password”, “1234”, etc.) against a small number of accounts which are very likely to exist (almost always “root”, but sometimes others such as “bin”).

Secondly, there were attempts on a large number of accounts which might plausibly exist (e.g. common first names, and software packages such as ‘oracle’). For these, there were very few password attempts, normally just the username tried as the password. Well-established good practice, such as choosing a reasonably strong password and denying password-based log-in to the root account, will be effective against both categories of attack. Surprisingly, there were few attempts which were obviously default passwords from software packages (though these were perhaps hidden among the attempts where the username equalled the password). However, one attempt was username: “rfmngr”, password: “$rfmngr$”, which is the default password for Websense RiskFilter (see p.10 of the manual).

There were, however, some more interesting attempts. One category was passwords far too complicated to be in a standard password dictionary, or even to be found through offline brute-force attacks on a hashed password database (e.g. “TiganilAFloriNTeleormaN”, “Fum4tulP0@t3Uc1d3R4uD3T0t!@#$%^%^&*?”, and “kx028897chebeuname+a”). The best guess is that these passwords were collected from an unhashed password database, or from a trojaned SSH server or client. Theo Markettos identified a likely source for this password database. Other odd password attempts included plain hashes (e.g. E4F89B211D997C1D5ECCE2153DC9184A, which is the MD5 of “upintheair”, found via Google), salted hashes (e.g. $1$EdkQIoSn$T3gzKLxlcxF7tsTCFqC8M) and filenames (e.g. “/var/run/sshd22.pid” and “/var/run/sshd”).
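
The hash lookup mentioned above is easy to reproduce; a two-line Python check (the expected value is simply the hash quoted above):

    import hashlib

    # Compare the MD5 of "upintheair" with the hash seen in the password attempt.
    print(hashlib.md5(b"upintheair").hexdigest().upper())
    # If the lookup above is right, this prints E4F89B211D997C1D5ECCE2153DC9184A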

One conclusion which can be drawn is that this attacker does not care enough about the quality of the password database to filter out passwords which it makes almost no sense to use. This carelessness is supported by the fact that after I initially enabled my patched SSH server, I received many log-in attempts but no passwords. It turned out that the default FreeBSD configuration is to only support keyboard-interactive authentication, rather than the more limited password authentication. The brute force attack tool only attempted password authentication, and therefore was always rejected before any password was sent, so the attack was running for days without ever having a hope of succeeding. I did enable password authentication, but some later attacks, presumably using a different tool and probably from a different attacker, attempted both keyboard-interactive and password authentication.
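
This behaviour can be seen from the client side: an SSH server announces which authentication methods it will accept before any password is sent. Below is a minimal sketch using the third-party paramiko library to list those methods; the hostname is a placeholder.

    # Sketch: ask an SSH server which authentication methods it offers, using the
    # third-party "paramiko" library. A server offering only keyboard-interactive
    # authentication rejects the plain "password" method before any password is
    # sent, which is what the early brute-force attempts here ran into.
    import paramiko

    def advertised_auth_methods(host, port=22, username="root"):
        transport = paramiko.Transport((host, port))
        transport.start_client()
        try:
            # "none" authentication is expected to fail; the failure message
            # lists the methods the server would accept.
            transport.auth_none(username)
            return []  # the server required no authentication at all
        except paramiko.BadAuthenticationType as exc:
            return exc.allowed_types
        finally:
            transport.close()

    if __name__ == "__main__":
        print(advertised_auth_methods("host.example.org"))
        # e.g. ['publickey', 'keyboard-interactive']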

One attack I hadn’t seen before was to try a large number of usernames, with parts of the hostname as the password. For a hostname of the style MACHINE.DOMAIN.DEPARTMENT.cam.ac.uk, the attack tried DOMAIN, DOMAIN.DEPARTMENT, MACHINE, then MACHINE.DOMAIN. This clearly isn’t a dictionary but a bit of custom code which did a reverse DNS lookup on the host and then generated some possible passwords. Using the hostname as a password for a host isn’t a good idea, but I can imagine some sysadmins doing so. The fact that some attackers are taking this approach might merit an explicit statement in password selection guidance.
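
A few lines of Python are all such an attack needs. The sketch below reconstructs the candidate-password generation from the attempts observed here; it is a guess at the logic, not the attacker’s actual code.

    # Sketch: derive candidate passwords from a target's reverse-DNS name, in the
    # order observed in the attack above (DOMAIN, DOMAIN.DEPARTMENT, MACHINE,
    # MACHINE.DOMAIN). This is a reconstruction, not the attacker's tool.
    import socket

    def hostname_candidates(ip):
        fqdn = socket.gethostbyaddr(ip)[0]   # e.g. machine.domain.dept.cam.ac.uk
        labels = fqdn.split(".")
        if len(labels) < 3:
            return [fqdn]
        machine, domain, department = labels[0], labels[1], labels[2]
        return [domain,
                domain + "." + department,
                machine,
                machine + "." + domain]

    if __name__ == "__main__":
        print(hostname_candidates("192.0.2.1"))   # documentation example address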

Another curious trend was receiving meta-data as usernames and passwords. This might be due to the brute force tool not properly interpreting comments in the dictionary file, or the attacker not understanding the comment notation. For example, I received the following username/password pairs:

  • [uratu/was HERE]
  • [I`m/A HaCkER ON]
  • [This/Is A Blow ShiT]
  • [acest/este:varza]
  • [data.conf/contzine]
  • [peste=6.000/de:usere]
  • [setate/=<SweetSoul>
  • [checking/SweetSoul]\par

It looks like the attacker thinks that square brackets are comment notation, but the brute force tool simply sends the text as SSH username/password pairs. There also seems to be a Romanian language connection: “acest este varza” roughly means “this is cabbage” (i.e. rubbish), “contzine” (conține) means “contains”, “peste … de usere” means “over … users”, and “setate” means “set up”. Taken together, “data.conf contzine peste 6.000 de usere setate” reads as “data.conf contains over 6,000 users set up”. The Romanian connection also came up in the previous post, where the Romanian for “Handbook of Mechanical Engineering” was tried as a password.

Attentive readers will note the “\par” in the above list, perhaps indicating that the file was converted to RTF at some point. This indeed appears to be the case, given the later attempt with username: “\*\generator”, password: “Msftedit 5.41.21.2508;}…[checking uratu]\par”. From this we can also conclude that the attacker is using Windows WordPad (the “Msftedit” generator string is written by the Microsoft RichEdit control which WordPad uses).

Overall it was an interesting experiment, with some conclusions confirmed but a few surprises. However, this was only a two week experiment on a single machine, so care should be taken in drawing generalisations which assume that these results are typical.

20 thoughts on “Observations from two weeks of SSH brute force attacks”

  1. Most of your examples have a connection to the Romanian language:

    – TiganilAFloriNTeleormaN => translates as “gipsies at Florin Teleorman”, where Florin is a common name, and Teleorman is a county in the southern part of Romania.
    – Fum4tulP0@t3Uc1d3R4uD3T0t! => “fumatul poate ucide rau de tot” in leet speak, which roughly translates as “smoking can kill pretty ugly”
    – uratu => “the ugly”
    – if you take it as a whole, “data.conf/contzine peste=6.000/de:usere” makes sense as a sentence which translates to “data.conf contains over 6,000 users”

    Most probably they were just skiddies, who are at most a nuisance, filling up the logs with garbage. Most of these attacks can actually be defeated by changing the SSH port. That confuses the crap out of them. As an experiment, I actually did that. Failed login attempts: 0. I guess their skiddie training classes don’t have an nmap module. Bummer.

  2. Some more Romanian words:

    TiganilAFloriNTeleormaN:
    1. Florin is a common Romanian first name
    2. Tiganila is probably the surname
    3. Teleorman is a county in southern Romania

    “Fum4tulP0@t3Uc1d3R4uD3T0t!@#$%^%^&*?” is leet speak for “Smoking can kill really hard!” + some random punctuation

  3. @SaltwaterC: about 5 months ago I had an SSH server compromised while sitting on a very nonstandard port. It was running an old, unpatched Debian…

    Just to say that port shifting is not a silver bullet, but I’m sure you knew that already.

  4. With a single low-end dedicated server somewhere, you can float around 20k active connections / tcp probes a second without breaking a sweat.

    But since each server you find will only handle so many ssh connections at a time, you’re never going to exhaust the entire password space on any one of them.

    Instead, you rely on the fact that you are probing around 20k servers at a time per attack server. If only 1 probe in 100k succeeds, that’s still one new server every 5 seconds or so, per attack server (20,000 probes/second × 1/100,000 ≈ 0.2 successes/second).

  5. “Just to say that port shifting is not a silver bullet, but i’m sure you knew that already.”

    Certainly this is no “cure”, but I was amazed at how much it cut down on attempts on a server I manage.

    The server was configured so that ssh did NOT allow passwords at all, only public keys. Yes, it’s a pain.

    But the number of attempts on port 22 was so high that the system was archiving the security logs daily! So, to avoid missing useful security log information, we decided to move the port just to make the logs readable!

    It worked.

  6. KnockD is the solution to this problem in my experience. Keep port 22 closed, and only open it to a specific IP temporarily after receiving an agreed series of port “knocks” from that IP. Pretty easy to set up, and clients exist for several platforms including mobile. Now my sshd gets zero brute-force attacks 🙂
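
    A minimal Python sketch of the client side of such a setup is below; the knock sequence is an assumption and must match the server-side knockd configuration.

        # Sketch of a port-knocking client: attempt connections to an agreed
        # sequence of normally-closed ports; the server-side knockd sees the SYNs
        # and opens port 22 for this source IP. The sequence here is an example.
        import socket
        import time

        KNOCK_SEQUENCE = [7000, 8000, 9000]   # must match the server's knockd.conf

        def knock(host, ports=KNOCK_SEQUENCE, delay=0.3):
            for port in ports:
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.settimeout(0.5)
                try:
                    s.connect((host, port))   # the SYN is all knockd needs to see
                except OSError:
                    pass                      # the ports are expected to be closed
                finally:
                    s.close()
                time.sleep(delay)

        if __name__ == "__main__":
            knock("host.example.org")
            # ssh can now connect as usual, until knockd closes the port again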

  7. I find tarpitting very successful as well. It’s also not a silver bullet, but it drastically cuts down on the number of attempts a single box can make. Unless the attacker’s boxes are all in communication with each other (this seems unlikely), each attack needs to start from scratch and I see mostly dups.

    Netfilter tarpitting doc
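
    Netfilter’s TARPIT target holds the attacker’s TCP connection open at the packet level. Purely to illustrate the idea, a crude application-level analogue in Python (the decoy port is an arbitrary choice) might look like this:

        # Sketch of a crude application-level tarpit: accept connections on a
        # decoy port and never send an SSH banner, so each brute-force thread
        # that connects simply stalls. Netfilter's TARPIT target does this far
        # more cheaply at the TCP level; this is only an illustration.
        import socket

        def tarpit(port=2222):                 # decoy port, chosen for the example
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", port))
            srv.listen(128)
            held = []                          # keep references so sockets stay open
            while True:
                conn, addr = srv.accept()
                print("holding connection from", addr)
                held.append(conn)              # never read, write or close

        if __name__ == "__main__":
            tarpit()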

  8. I’m pretty sure the Romanian passwords and/or script kiddies use the same tools that were available back in the day. Probably SSH and/or FTP scans are left running in the background, logging like they used to.

    I’d be curious about a central location where they tend to keep their stuff (an email account or some free webhosting), and about acting on them, providing the police with some logs and such. I might be of help if you need any 😉

  9. Off topic, rookie here… do you think a brute force attack got my password on Facebook? I was hacked and they messed with my account… how did they do that???

  10. @whiskey A couple of options seem most likely: you were compromised via email or a dropper and ended up installing a trojan on your machine, OR someone used an XSS attack to hijack your credentials in quick ‘s-kiddie’ fashion.

    Additionally, it could just be your friends or an ex who knew you and the fact your password may be “12345”.

    @S. Murdoch, cool study. I think a worthwhile experiment would be to try to make your box seem more important than it is and see if someone other than s-kiddies comes knocking.

    A.

  11. Hello Steven. Very nice blog posts. Since you are interested in the subject, I would recommend that you set up an SSH honeypot. I’m currently blogging about them and use them in various forms. Please take a look at “Kippo SSH honeypot”; I have found it to be the easiest script to set up. It can log everything you mentioned, plus IPs and input (if the system is compromised), and it downloads and saves locally the files that attackers wget inside the system, etc. Let me also do some advertising here 😛 I have written a script called Kippo-Graph [ http://bruteforce.gr/kippo-graph ] that visualizes all this data and produces around 30 graphs, extracts geolocation data from IPs, shows TTY commands, etc. So you can see results of the SSH attacks like this: [ http://bruteforce.gr/wp-content/uploads/kippo-graph-0.6.1-gallery-DEMO.png ] and this: [ http://bruteforce.gr/wp-content/uploads/kippo-graph-0.6.1-geo-DEMO.png ]. I think you would find it useful, and I would of course love some feedback. Take care and keep us updated; I have followed you on Twitter as well. BTW, Romania-originated probes, and especially tools downloaded inside the system (for other attacks), are mostly what I encounter as well.

  12. Setting up a honeypot seems a plausible effort, but who has time for that? Port switching works for cutting attempts down, but I’ve found that after a few months the attempts start on the new port too. Most, like 99.9%, of these servers are Chinese (which, given the existence of the Great Firewall of China, makes me feel like it is state-sponsored, but I digress). Perhaps what we need is a real-time list of servers attempting these attacks; then we could subscribe and blacklist, much like the email RBLs. As for tagging compromised servers, once they “cool down” we could simply delete their entries. Anyone working on this? I would be happy to contribute.

  13. I set up a server back in October. Its main duty now is just logging ssh attempts. I use DenyHosts and then geolocate the IP. My site rackcity.dyndns.biz lists the attackers and maps them in Google Maps. It’s just a small personal project. Thought some of you here might like it.

  14. As an experiment…go here: http://www.apnic.net/publications/research-and-insights/ip-address-trends/apnic-resource-range

    then use iptables to DROP every one of those subnets, and note the frequency of your brute-force attacks.

    (It would be great if firewalls/routers came with a little button, ‘CULL APNIC’.)

    On a development server/public cloud instance, I wrote some scripts to scrape login failures from auth.log, then count unique IPs, report the frequency for each, and sort (a rough sketch of such a script follows this comment). A separate script does the same for successes. I blacklisted entire subnets (x.x.x.x/8) for brute force attacks. After about a week or so of this, I noted that I had blacklisted all but 3 of the subnets from the APNIC list above, and the brute force attacks had dropped to next to nil. What remained were weak, half-hearted attacks (3-6 attempts with standard default credentials). When I source-checked these attacks, in order of frequency, APNIC was way at the top of the list, then RIPE, then all the rest of the world way down in the noise. A tiny few from Africa, a tiny few from the Caribbean, and then next to nothing from US/Canada, clustered in Ashburn, VA and Orem, UT (either Verizon’s GNOCC in Ashburn, or Amazon Cloud, or some other cloud, explaining the clustering).

    Whitelisting whenever possible seems like the way to go to cull the noise.
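
    A minimal sketch of the kind of log-scraping script described above; the log path and the “Failed password” message format are assumptions that vary between systems.

        # Sketch: count SSH login failures per source IP from the syslog auth log
        # and print them by frequency. The path and message format are assumptions;
        # adjust them for the local system (e.g. /var/log/secure on some distros).
        import re
        from collections import Counter

        LOG_PATH = "/var/log/auth.log"
        FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ "
                            r"from (\d{1,3}(?:\.\d{1,3}){3})")

        def failed_logins_by_ip(path=LOG_PATH):
            counts = Counter()
            with open(path, errors="replace") as log:
                for line in log:
                    match = FAILED.search(line)
                    if match:
                        counts[match.group(1)] += 1
            return counts

        if __name__ == "__main__":
            for ip, count in failed_logins_by_ip().most_common():
                print("%6d  %s" % (count, ip))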
