NASA Man-in-the-Middle Attack: Why you should use proper SSL Certificates
A posting to Pastebin, by a group that calls itself "Cyber Warrior Team from Iran", claims to have breached a NASA website via a "man in the middle" attack [1]. The announcement is a bit hard to read due to the broken English, but here is how I parse the post and the associated screenshot:
The "Cyber Warrior Team" used a tool to scan NASA websites for SSL misconfigurations. They came across a site that used an invalid, likely self signed or expired certificate. Users visiting this web site would be used to seeing a certificate warning. This made it a lot easier to launch a man in the middle attack. In addition, the login form on the index page isn't using SSL, making it possible to intercept and modify it unnoticed.
Once the attackers set up the man-in-the-middle attack, they were able to collect usernames and passwords.
Based on this interpretation, the lesson should be to stop using self-signed or invalid certificates, even for "obscure" internal websites. I have frequently seen the argument that for an internal website it is "not important", "too expensive" or "too complex" to set up a valid certificate. But SSL isn't doing much for you if the certificate is not valid: the encryption provided by SSL only works if the authentication works as well. Otherwise, you never know whether the key you negotiated was negotiated with the right party.
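To illustrate the point, here is a minimal sketch (Python 3, standard library only; the host name is a placeholder, not one of the sites involved) of what a valid certificate buys you: with verification enabled, a self-signed or expired certificate aborts the handshake, and an active man-in-the-middle produces exactly the same failure. If users are trained to click through the former, they will also click through the latter.

import socket
import ssl

HOST = "internal.example.com"  # placeholder for an "obscure" internal site

context = ssl.create_default_context()  # verifies the chain and the host name

try:
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("verified connection to", tls.getpeercert()["subject"])
except ssl.SSLCertVerificationError as err:
    # A self-signed or expired certificate ends up here -- and so does a
    # man-in-the-middle presenting his own certificate. The client cannot
    # tell the difference between the two cases.
    print("certificate verification failed:", err.verify_message)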
And of course, the login form on the index page should be delivered via SSL as well. Even if the form is submitted via HTTPS, it is subject to tampering if the page containing it is delivered via HTTP.
These are good old "OWASP Top 10" style lessons, but sadly, we still need to repeat them again and again. For a nice test to see if SSL is configured right on your site, see ssllabs.com [2].
Also, in more complex environments, you need to make sure that all of your SSL certificates are in sync. We recently updated SSL certificates and forgot to update the one used by our IPv6 web server (thanks to Kees for pointing that out to us).
[1] http://pastebin.com/MFPMGZ4Z
[2] https://www.ssllabs.com
------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Why Flame is Lame
We have gotten a number of submissions asking about "Flame", the malware that was spotted targeting systems in a number of Arab countries. According to existing write-ups, the malware is about 20 MB in size and consists of a number of binary modules that are held together by a duct-tape script written in Lua. A good part of the malware's size is attributable to its Lua interpreter.
If you ever find something like that using Perl instead of Lua... maybe I did it. I love to tie together various existing binaries using Perl duct tape. However, I am not writing malware... and any serious commercial malware-writing company would probably have fired me after seeing this approach. Using Lua would probably not fare much better. "Real" malware is typically plugged together from various modules, but compiled into one compact binary. Pulling up a random SpyEye description [1] shows that it is only 70 kBytes in size, and retails for $500. Whatever government contractor put together "Flame" probably charged a lot more than that. As with most IT needs: if you run some government malware supply department, think about going COTS.
Of course, "Flame" is different because it appears to be "government sponsored". Get over it. Did you know governments hire spies? People who get paid big bucks (I hope) to do what can generally be described as "evil and illegal stuff". They actually do that for pretty much as long as governments exist, and McAfee may even have a signature for it.
We are getting a lot of requests for hints on how to detect whether you are infected with Flame. Short answer: if you have enough free time on your hands to look for "Flame", you are doing something right. Take a vacation. More likely than not, your time is better spent looking for malware in general. In the end, it doesn't matter much why someone is infecting you with the malware du jour. The important part is how they got in. They pretty much all use the same pool of vulnerabilities and similar exfiltration techniques. Flame is actually pretty lame when it comes to exfiltrating data, as it uses odd user-agent strings. Instead of looking for Flame: set up a system to whitelist user-agents. That way, you may find some malware that actually matters, and if you happen to be infected with Flame, you will see that too.
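For what it's worth, here is a minimal sketch of the whitelisting idea, assuming a tab-separated proxy log with client IP, user-agent and URL per line. The log format and the whitelist patterns are assumptions; adapt both to your own environment.

import csv
import re

# Patterns considered "known good"; extend this for your environment.
WHITELIST = [
    re.compile(r"^Mozilla/5\.0 "),        # mainstream browsers
    re.compile(r"^Microsoft-CryptoAPI/"), # Windows certificate checks
]

def is_whitelisted(user_agent):
    return any(p.match(user_agent) for p in WHITELIST)

# Assumed log format: client_ip <TAB> user_agent <TAB> url
with open("proxy.log", newline="") as log:
    for client_ip, user_agent, url in csv.reader(log, delimiter="\t"):
        if not is_whitelisted(user_agent):
            # Unusual or empty user-agents surface here for an analyst to review.
            print("suspicious UA from %s: %r -> %s" % (client_ip, user_agent, url))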
But you say: Hey! I can't whitelist user-agents! Sorry: you already lost. On a good note: scrap that backup system. All your important data is already safely backed up in various government vaults. (Recovery is a pain though...)
Sorry for the rant. But I had to get it out of my system. Oh... and in case you are still worried... the Iranian CERT has a Flame removal tool [2]. Just apply that. I am sure it is all safe and such.
[1] http://www.symantec.com/security_response/writeup.jsp?docid=2010-020216-0135-99
[2] http://certcc.ir/index.php?name=news&file=article&sid=1894
------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
SCADA@Home: Your health is no secret no more!
One of my interests recently has been what I call "SCADA@Home". I use this term to refer to all the Internet-connected devices we surround ourselves with. Some may also call it "the Internet of devices". In particular for home use, these devices are built to be cheap and simple, which hardly ever includes "secure". Today, I want to focus on a particular set of gadgets: healthcare sensors.
Like SCADA@Home in general, this part of the market has exploded over the last year. Internet-connected scales, blood pressure monitors, glucose measurement devices, thermometers and activity monitors can all be purchased for not too much money. I personally consider them "gadgets", but they certainly have some serious healthcare uses.
I will not mention any manufacturer names here, and I have anonymized some of the "dumps". The selection of devices I have access to is limited and random, and I do not want to create the impression that these devices are worse than their competitors. Given the consistent security failures, I consider them pretty much all equivalent. Vendors have been notified.
There are two areas that appear to be particularly noteworthy:
- Failure to use SSL: Many of the devices I looked at did not use SSL to transmit data to the server. In some cases, the website used to retrieve the data had an SSL option, but it was outright difficult to use it. (OWASP Top 10: Insufficient Transport Layer Protection)
- Authentication flaws: The devices use weak authentication methods, such as a device serial number. (OWASP Top 10: Broken Authentication and Session Management)
First of all, there are typically two HTTP connections involved: the first connection is used by the device to report the data to the server; in some cases, the device may also retrieve settings from the server. The second HTTP connection is from the user's browser to the manufacturer's website and is used to review the data. The data submission typically uses a web service. The websites themselves tend to be Ajax/Web 2.0 heavy, with the associated use of web services.
The device is typically configured by connecting it via USB to a PC or a smartphone. The smartphone or desktop software provides a usable interface to configure passwords, something that is missing, for example, from Bluetooth headsets, a common problem with those. Most of the time, the data is not sent from the device itself, but from a smartphone or desktop application: the device uploads data to the "PC", and the PC then submits the data to the web service. This should provide access to the SSL libraries that are available on the PC. In a few cases, the device sends data directly via WiFi. In the examples I have seen, these devices still use a USB connection to be configured from a PC.
Example 1: "Step Counter" / "Activity Monitor"
The first example is an "activity monitor". Essentially a fancy step counter. The device clips on your belt, and sends data to a base station via an unspecified wireless protocol. The base station also doubles as a charger. The user has no direct control over when the device uploads data, but it happens frequently as long as the device is in range of the base station. Here is a sample "POST":
POST /device/tracker/uploadData HTTP/1.1
Host: client.xxx.com:80
User-Agent:
Content-Length: 163
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Cookie: JSESSIONID=1A2E693AD5B28F4F153EE9D23B9237C8
Connection: keep-alive

beaconType=standard&clientMode=standard&clientId=870B2195-xxxx-4F90-xxxx-67CxxxC8xxxx&clientVersion=1.2&os=Mac OS X 10.7.4 (Intel%2080486%10)
The session ID appears to be inconsequential; the only identifier is the client ID. Part of the request was obfuscated with "xxx" to hide the identity of the manufacturer. The response to this request:
<?xml version="1.0" ?>
<xxxClient version="1.0">
  <response host="client.xxx.com" path="/device/tracker/dumpData/lookupTracker" port="80" secure="false"></response>
  <device type="tracker" pingInterval="4000" action="command">
    <remoteOps errorHandler="executeTillError" responder="respondNoError">
      <remoteOp encrypted="false">
        <opCode>JAAAAAAAAA==</opCode>
        <payloadData></payloadData>
      </remoteOp>
    </remoteOps>
  </device>
</xxxClient>
Interestingly, some of the attributes in this response, namely port="80" and secure="false" in the response element, suggest that there may be an HTTPS option.
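Coming back to the authentication problem: since the upload travels over plain HTTP and the client ID is the only identifier, anyone who has observed a single request can replay or forge uploads. A hypothetical sketch with Python's standard library, mirroring the capture above (host and client ID are of course redacted placeholders):

from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Field values mirror the captured POST; the clientId is all that
# identifies the device -- and all an attacker would need.
body = urlencode({
    "beaconType": "standard",
    "clientMode": "standard",
    "clientId": "870B2195-xxxx-4F90-xxxx-67CxxxC8xxxx",  # sniffed identifier
    "clientVersion": "1.2",
    "os": "Mac OS X 10.7.4",
})

req = Request(
    "http://client.xxx.com/device/tracker/uploadData",
    data=body.encode("ascii"),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(urlopen(req).read().decode())  # server response, as in the XML above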
Example 2: Blood Pressure Sensor
The blood pressure sensor connects to a smartphone, and the smartphone will then collect the data and communicate with a web service. The authentication looks reasonable in this case. First, the smartphone app sends an authentication request to the web service:

GET /cgi-bin/session?action=new&auth=jullrich@sans.edu&hash=xxxxx&duration=60&apiver=6&appname=wiscale&apppfm=ios&appliver=307 HTTP/1.1
The "hash" appears to be derived from the user provided password and a nonce that was sent in response to a prior request. I wasn't able to directly work out how the hash is calculated (which is a good sign) and assume it is a "Digest like" algorithm. Based on the format of the hash, MD5 is used as a hashing algorithm, which isn't great, but I will let it pass in this case.
All this still happens in clear text, and nothing but the password is protected. The server returns a session ID that is used for authentication going forward. The blood pressure data itself is transmitted in the clear, using proprietary units, but I assume that once you have a range of samples, it is easy to derive the conversion:
action=store&sessionid=xxxx-4fc6c74e-0affade3&data=
* TIME unixtime
1338427213
* ID mac,hard,soft,model
02-00-00-00-xx-01,0003000B,17,Blood Pressure Monitor BP
* ACCOUNT account,userid
jullrich@sans.edu,325xxx
* BATTERY vp,vps,rint,battery %25
62xx,53xx,77xx,100
* RESULT cause,sys,dia,bpm
0,137xx,90xx,79xx
* PULSE pressure,energy,centroid,timestamp,amplitude
x1x4,x220,6xx98,1x60x,x9
x6x9,x450,6xx58,2x58x,x0
x4x9,x086,6xx02,2x12x,x1
(Some values are again replaced with "x". In case you wonder... the blood pressure was a bit high, but OK ;-)
In my case, the device sent a total of 12 historic values in addition to the last measured value; so far, I had only taken 12 measurements with the device.
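As for deriving the proprietary units: assuming the raw values map linearly to conventional units, a few paired observations (raw value from the capture, reading displayed in the app) are enough for a least-squares fit. The numbers below are made up for illustration.

# Hypothetical pairs of (raw value from capture, mmHg shown in the app).
samples = [(1370, 137.0), (1425, 142.5), (1280, 128.0)]

n = len(samples)
sum_x = sum(x for x, _ in samples)
sum_y = sum(y for _, y in samples)
sum_xx = sum(x * x for x, _ in samples)
sum_xy = sum(x * y for x, y in samples)

# Ordinary least squares: mmHg = slope * raw + intercept
slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
intercept = (sum_y - slope * sum_x) / n

print("mmHg = %.4f * raw + %.2f" % (slope, intercept))  # here: 0.1000 * raw + 0.00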
Associated web sites
The manufacturers of both devices offer websites to review the data. Both use SSL to authenticate, but later bounce you to an HTTP site, adding the possibility of a "Firesheep"-style session hijacking attack. For the blood pressure website, you may manually enter "https" and it will "stick". The activity monitor has an HTTPS website, but all links will point you back to HTTP. A third device, a scale, which I am not discussing in more detail here as it is very much like the blood pressure monitor, suffers from the same problems.
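A quick way to spot the "all links point back to HTTP" problem is to fetch the HTTPS version of a page and list every link, form action or script source that downgrades to plain HTTP. A small sketch; the URL is a placeholder, not one of the sites tested.

from html.parser import HTMLParser
from urllib.request import urlopen

class DowngradeFinder(HTMLParser):
    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src", "action") and value and value.startswith("http://"):
                print("<%s> downgrades to %s" % (tag, value))

# Placeholder URL; point this at the HTTPS version of the site in question.
page = urlopen("https://dashboard.example.com/").read().decode("utf-8", "replace")
DowngradeFinder().feed(page)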
A quick summary of the results:
Device                | Authentication       | Data Encryption | Website Auth SSL | Website Data SSL
Blood Pressure Sensor | encrypted password   | none            | login only       | only if user forced
Activity Monitor      | device serial number | none            | login only       | hard for user to force
I have no idea if HIPAA or other regulations would apply to data and devices like this. Like I said, these are "gadgets" you would find in a home, not in a doctor's office. The scale mentioned above was very much like the blood pressure monitor: decent authentication, but no SSL. If you have any devices like this, let me know if you know how they authenticate and/or encrypt.
So how bad is this? I doubt anybody will be seriously harmed by any of these flaws. This is not like the wireless insulin pumps or infusion drips that have been demonstrated to be weak in the past. However, it does show a general disrespect for the privacy of the user's data, and an unwillingness to fix problems that are pretty easy to fix.
------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute