Despite my (repeated) warnings, the implementation of the new cookie legislation caught our technical gurus with their figurative pants down. Some disclaimers were cobbled together quickly and posted up.
But it seems we're doing a much better job than Paypal.
Don't get me wrong - I know that (done properly) cookies greatly enhance the security of interaction - indeed I'd be very wary of a payment processing system that didn't implement cookies - but then I'm also very wary of payment processors who don't comply with the law.
Bought some stuff today via Paypal - the first time I'd used it for several months - and there was no mention of cookies, let alone an opt-in. It actually dropped no fewer than 40 cookies on my browser! FFS! And while the majority of them were session cookies, it's interesting to note that Paypal DOES NOT END the session at the completion of payment. That's right: after paying for goods via Paypal and returning to the original site, you are still logged into Paypal!
"We use cookies written with Flash technology to help prevent fraud "
Are the criminal classes still so incompetent that they don't know how to get around evercookie?
To top it all, it seems that Paypal now stuffs more data into its cookies than it is willing to consume. After I wrote this post I went back to double check if there was any effort to comply with The Privacy and Electronic Communications (EC Directive) Regulations 2003, only to get failures accessing https://www.paypal.com/ with an error message indicating that I was returning more cookies than it could cope with:
(the green bar was added by me to redact my data). It seems that someone thought it a good idea to transfer transactional information via cookies.
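If you want to check this sort of thing for yourself, here's the kind of snippet you can paste into the browser console to see how many cookies a page can see and roughly how big the Cookie header they generate would be. This is just a rough sanity check - document.cookie only exposes cookies without the HttpOnly flag, so the browser may actually be sending more than this reports.

// Rough check from the browser console: how many cookies are visible to the
// page, and approximately how large the Cookie request header they produce is.
// Note: document.cookie omits HttpOnly cookies, so this undercounts.
var cookies = document.cookie ? document.cookie.split("; ") : [];
console.log("visible cookies: " + cookies.length);
console.log("approx. Cookie header size: " + document.cookie.length + " bytes");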
Just now the news is full of the arrogance and incompetence of the financial institutions - to a certain extent I have reserved judgement. But increasingly it appears that the IT operations are run by the CEO's nephew who once read a book about programming - OK, maybe I exaggerate. There are some very competent and brilliant people out there, some of whom I'm acquainted with - it just seems that the less able IT people of my ken end up in the well paid jobs. Grrrrrr!
Monday, 9 July 2012
Friday, 22 June 2012
Very Non Cooperative (really)
The powers that be at my work have decided to revisit the issue of BYOD / external access. I'd implemented solutions for this at two previous employers, so thought I'd be able to come up with something suitable here.
In both the previous exercises, I'd arrived at the conclusion that the easiest way to implement this was in terms of services - relying on open protocols such as HTTP, SMTP, telnet etc (before you reach for your guns, the telnet thing was due to a requirement for dg200 terminal emulation - I couldn't get a client which would run over ssh, so the telnet was encapsulated in SSL using stunnel). However this project is a bit different for various reasons - not least a very real concern that the users will leave their computers on trains and park benches. Others in the office had previously come up with horribly complicated encryption schemes, but these are a nightmare to support. So after a good deal of thought I realized I could solve a whole load of problems at one stroke by using a remote window / desktop protocol. VNC was the obvious choice due to the wide availability of clients. And I'd previously implemented lots of VNC servers on Linux and on MSWindows - it was always a no-brainer. I did have cunning plans for dealing with small screen real estate, keyboard-less devices etc - but best to start with small steps.
So I fired up Synaptic on my PCLinuxOS desktop and installed TightVNC server and a client. Running the server from the command line, it works (but obviously no window manager, and only the simple standalone VNC auth). So I set it all up to run through xinetd (similar to this), re-configured kdm and xfs, fired up a client, entered my username and password - "the server has closed the connection". Check firewall - no problem. Check logs - nothing there. Double check my config changes - all OK. Just to make sure I do a clean reboot. Still not working. Check the man page - what's happened to all the X integration stuff? Gone! No xdm support!
Next I tried RealVNC direct from the RealVNC website - got a licence - read the docs....no more inetd support? WTF? The only logical reason I can think of for this is that they want to enforce their licence terms. Still, I could live with that for the POC - who knows, we might even end up paying for licences for the service - in return for support. But every time I connected to localhost, I got "user not recognised or password was blank". RealVNC do say on their website that this can be an issue on some versions of Linux - and the solution is to disable PAM authentication (a bit weird since they say elsewhere that it is not available in the 'free' version). So I updated the configs and restarted the server, to no avail. Tried various tweaks and fettling. Checked the firewall. Nothing. Oh, and there's no 'uninstall' functionality - so I had to reverse engineer the installation to clean it up.
Have I got dumber with old age or is this another case where a good product has turned into bloatware?
Aaaaarrrrghhhh!
(some updates added as comments)
Friday, 23 March 2012
Back online
Well, after some digging around I've gone with PCLinuxOS. Flightgear and Google Chrome are available on this. It's significantly faster than Fedora, even though I've switched back to a 32 bit OS (from x86_64). I'd previously installed PCLinuxOS on my daughter's laptop and been impressed with the results.

I've gone with larger fonts (must be getting older) than I had before - which, combined with the bright colours of KDE, makes it a bit cutesy - especially with the MacOS type window decorations.
Kmail still uses Akonadi - but unlike on Fedora, it's not using a MySQL backend - and it's not maxing out one of the CPUs. Still has quite a big footprint - but again no apparent way to switch it off.
I fired up Thunderbird and added the ExportImportTools extension. This still does not support Maildirs - but it was simple to export my mailboxes from kmail and import them in Thunderbird. The latter integrates rather well with KDE, but it is a bit ugly (unfortunately there are very few themes currently available for v11).
BTW for anyone else wanting to export a mailbox file from kmail...although there are some scripts out there on the internet to do this, if you've got a running kmail installation, all you need to do is select the emails, right click and 'save as...' to export a mailbox format file.
So I'll continue looking for an email client which supports multiple identities / SMTP and POP servers, is lightweight and not ugly. I suspect I'm going to have to wait for a nice Thunderbird theme. I suppose I could write one.....no, I've got enough half-baked projects for the time being.

Wednesday, 21 March 2012
What were they thinking
My home computer is a gateway to my data (and stuff other people have shared). So the list of X applications I use is short - OpenOffice, Firefox, kmail and konsole, occasionally xrdp and VNC. So I don't upgrade it very often. The last time was when Fedora 9 was shiny and new.
But I wanted to install Flightgear for my son. Rather than go through all the hassle of trying to build it myself, I thought I'd just use binary packages - but none available for Fedora 9. Since it has been a while since I upgraded my operating system I thought I'd just bite the bullet and upgrade to the current Fedora release (16).
A bit of googling and I read about preupgrade - this seemed like an easy way to upgrade. How wrong was I. It downloaded lots of stuff then rebooted into Anaconda - then stopped - "What type of media contains the installation tree?" - my hard disk. Which hard disk? Again not a problem. What's the path? More reboots and googling and I had a path. Path is invalid. So then I cleaned up all the mess preupgrade had left behind on my disk and tried to upgrade using a CD. You can't upgrade from a CD. Eventually I found the installation DVD iso. Burned a copy and rebooted. "You can only upgrade from the previous 2 releases of Fedora". Grrrr!
Time for bed.
Next day. My /home is on a separate partition and fortunately I had plenty of spare disk. It has been running on reiserfs for the past 10 years or so - so I created a copy on top of ext4. Then installed Fedora 16, copied over the passwd/shadow/group data, added the ext4 /home to the fstab and rebooted. It seemed to work - but OMG, booting up takes a long time now.
The colours in KDE made some of the text unreadable - fixed that. Load was high - so I found and disabled Nepomuk (desktop search engine - no, I don't need that). Firefox and OpenOffice running OK. Then I started kmail. Oh dear. The load on my machine went through the roof. WTF is mysqld doing running? I didn't know that kmail now insists on using Akonadi for something - I'm not sure what - certainly I'm pretty sure I don't need it. More googling....apparently since v4.4 of kmail you can disable Akonadi. OK, how do I do that? Use the advanced tab in System Settings. What advanced tab? There is none!
It seems I'm not the only one to be very disappointed in KDE's bloatware.
If I wanted my PC to run very, very slow I would have installed Microsoft Windows on it. I've now spent nearly as much time trying to turn this into a useful computer as I did with MSWindows Vista on my daughter's laptop.
Oh, and kmail failed to import my configuration for SMTP (but didn't think to tell me) and decided to use the local sendmail instead (which I cannot be bothered configuring to handle authentication and TLS, never mind the complications of SPF).
So now I'm wondering if I should switch to a different window manager / desktop, or even a different distribution.
Going to a different distribution offers some advantages and will be just as painful as changing window manager. The big issue is migrating my kmail email database and having something which is likely to continue supporting Flash for some time. The latter probably means something I can run Google Chrome in. Which narrows down the choices a LOT. I've never been a fan of the way Ubuntu manages permissions - the root user is there for a REASON. Gentoo is fast - and it will run Flightgear and Google Chrome...but it seems like very hard work. Mandriva seems to still be a popular choice - and there are lots of binary rpms for Flightgear. However I think CentOS looks like a safe option.
Wednesday, 14 March 2012
Browser fingerprinting
I'm currently spending a significant amount of my time on fraud investigations at work. I've written some code which collates logs and transactional data then mashes it up to find patterns. It is accurately predicting the majority of the fraud. (I keep telling my boss I want to work on commission but so far I'm stuck with a salary). Although this is a huge leap forward from the position before I was involved, I'd like to reduce the remaining losses further.
My program relies heavily on IP addresses to identify individual client devices, while the groups carrying out the fraud are mostly using mobile dongles or proxying via vulnerable webservers in an effort to obscure their identity. So I've been looking at alternative methods for identifying who is at the far end of the wire.
(I should point out that in order to carry out a transaction on our system, the users must authenticate themselves, therefore anonymity is not an issue for our legitimate users).
While evercookie looks to be ideal for our purposes, its high profile means that our attackers may be specifically on the lookout for it - as well as the risk that it may be detected as malware by legitimate users with the right software. And despite the fact that our legitimate users have a thoroughly verified identity, I think undermining the security of their computers is a step too far. The methods described in the Panopticlick project seem to be more appropriate, so I've been looking at these in some detail.
Which User Agent?
The obvious starting point is the user-agent. From the data I already have, a shared user-agent string suggests that I can link transactions from different IP addresses. But user agents do change over time. Google Chrome is particularly troublesome - it appears to upgrade itself on the fly - even mid-session! And of course most browsers have tools for easily switching the user-agent reported in Javascript and in HTTP requests.
The only people publishing stats regarding faked user agents are, not surprisingly, people developing code in this area - and their sites are more likely to be visited by technically sophisticated users deliberately trying to test out the detection. I think it's reasonable to surmise that faking of user agents in the wild is relatively rare - so if I can reliably detect a faked user agent, knowing what the real user agent is does not help significantly with generating a unique fingerprint. A further consideration is that even were this possible, sending the real user-agent back serverside increases the visibility of the fingerprinting process to an attacker.
It's worth noting that the navigator object has other properties / methods indicating the identity of the browser. Notably:
appCodeName
appName
appVersion
platform
With the user-agent switcher on Safari, navigator.userAgent and navigator.appVersion match the selected user agent in the switcher, and this is what is sent in the request. appName and appCodeName are always Netscape and Mozilla respectively. However navigator.platform always reports win32, regardless.
With a Firefox user agent switcher, all the properties were changed.
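For illustration, a minimal sketch (not my production script) of gathering these properties so they can be compared against the User-Agent header the server actually received - any mismatch is a hint that a switcher is in play:

// Collect the navigator identity properties so they can be cross-checked
// against the User-Agent header seen server-side.
function navigatorIdentity() {
    var n = window.navigator;
    return {
        userAgent:   n.userAgent,
        appCodeName: n.appCodeName,
        appName:     n.appName,
        appVersion:  n.appVersion,
        platform:    n.platform
    };
}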
Pedro Laguna provides a method for detecting the browser type by using the text of javascript exception messages. I'd previously found such an approach to be very effective when fingerprinting SMTP servers, so I was optimistic that this could be used to detect most instances of UA faking. However although it works up to a point, it can produce nasty security warnings in some browsers and I had trouble accurately detecting MSIE v6 and Google Chrome. YMMV.
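A rough illustration of the exception-message idea: force a runtime error and look at the wording of the message, which differs between javascript engines. The substrings below are indicative only - they'd need verifying against the browsers you actually care about.

// Sketch of engine detection via exception text (after Pedro Laguna's idea).
// The message substrings are illustrative, not a definitive mapping.
function engineFromException() {
    try {
        null.foo; // deliberately throw a TypeError
    } catch (e) {
        var msg = e.message || "";
        if (msg.indexOf("null has no properties") !== -1) return "gecko?";
        if (msg.indexOf("Cannot read") !== -1) return "v8?";
        if (msg.indexOf("is not an object") !== -1) return "jsc?";
        return "unknown: " + msg;
    }
}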
Robert Accettura has a nice writeup of the detection implemented in jQuery, Prototype and YUI, which parse the user agent string, while MooTools uses feature detection. The MooTools implementation only differentiates between different suppliers of browsers - not between versions of browser from the same supplier.
A paper by Mowery, Bogenreif, Yilek and Shacham describes a methodology for identifying browsers based on their javascript execution characteristics. However they don't publish the exact code they used for their fingerprinting. I'm also sceptical of how effective the resolution would be on a wide variety of client machines running other applications concurrently - and without access to their code it'd be a lot of effort to test myself.
The enigmatic Norbert proposes using variations in Javascript parsing metrics (via arguments.callee.toString().length) - this led me to some more specific articles on the subject, notably those by Bojan Zdrnja on SANS (1)(2)
Again it differentiates between parsing engine families rather than individual versions. Using a test script (a minimal version of which is sketched after the list below) I got the following values:
115 - Safari 3
116 - Firefox 10
187 - Chrome 5, MSIE 6
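For reference, a minimal version of the parsing-metric test (illustrative, not my actual test script) looks like this - the length of a function's decompiled source differs between engines because each one reformats the source slightly differently:

// Parser fingerprint via decompiled function length.
// Note: arguments.callee is unavailable in ES5 strict mode, so this must run
// in non-strict code.
function parserFingerprint() {
    return arguments.callee.toString().length;
}
// e.g. send parserFingerprint() back as one component of the fingerprint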
Another approach is to simply look at what functionality is exposed by the javascript API. Developers usually add features and rarely retire them, so testing which APIs are present turns out to be a fairly effective approach for detecting specific versions of browsers. These pages have some more details on feature detection. (3)(4)(5)
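A sketch of feature detection used as a fingerprint component - each test is a boolean, and the resulting bit string narrows down the browser version. The particular features chosen here are illustrative; any reasonably stable set will do:

// Build a bit string from a handful of feature tests.
function featureBits() {
    var tests = [
        typeof window.localStorage !== "undefined",
        typeof window.postMessage === "function",
        typeof document.querySelector === "function",
        typeof window.JSON !== "undefined",
        typeof history.pushState === "function",
        typeof window.Worker === "function"
    ];
    var bits = "";
    for (var i = 0; i < tests.length; i++) {
        bits += tests[i] ? "1" : "0";
    }
    return bits; // e.g. "111101"
}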
So based on my research I used feature detection as the primary driver for my user agent checker, but did have to fall back on the Javascript parsing metrics to apply Firefox-specific tests. My script includes the parser check, screen size / depth and language, as well as the availability of selected APIs, to contribute to the fingerprint.
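To give a flavour of how these components come together (illustrative only - this is not my script, and a real deployment would also fold in the parser check, feature bits, fonts and plugins, then probably hash the result server-side):

// Combine a few cheap signals into one fingerprint string.
function buildFingerprint() {
    var parts = [
        screen.width + "x" + screen.height + "x" + screen.colorDepth,
        navigator.language || navigator.userLanguage || "",
        new Date().getTimezoneOffset()
    ];
    return parts.join("|");
}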
Fonts
Several of the published documents / code make reference to the installed fonts as a good indicator of variability. Most use Flash or Java to get a list of the fonts. Interestingly both seem to provide an unsorted list - so the ordering is determined by where they appear on the disk - adding more unique behaviour. Not having a development toolkit for either meant I had limited scope for testing this myself - but I did come across some code written by Lalit Patel. Lalit renders a fixed string first in a generic fallback font and then with the candidate font listed ahead of the fallback - if the size of the rendered string changes, then the system must have the preferred font available - neato!
Taking this one step further, if I have a list of fonts to check for, then I can build up a list of what's available. Of course if I look for, say, Arial on a MS Windows platform, Helvetica on MacOS or Vera on Linux it's not going to tell me very much - but on www.codestyle.org I found lists showing the less common fonts. While looking for the most common fonts doesn't add a lot of variability, looking for very rare fonts adds code with little yield - so I created a list of the fonts lying in between these extremes for my code.
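A sketch of the measurement trick, based on Lalit's published approach (the candidate font list here is illustrative, not the one I actually use):

// Font detection by size comparison: measure a test string in a baseline
// generic font, then again with the candidate font listed first. If the
// width or height changes, the candidate font is installed.
function detectFonts(candidates) {
    var testString = "mmmmmmmmmmlli";
    var span = document.createElement("span");
    span.style.fontSize = "72px";
    span.style.position = "absolute";
    span.style.left = "-9999px";
    span.appendChild(document.createTextNode(testString));
    document.body.appendChild(span);

    span.style.fontFamily = "monospace";
    var baseWidth = span.offsetWidth, baseHeight = span.offsetHeight;

    var found = [];
    for (var i = 0; i < candidates.length; i++) {
        span.style.fontFamily = "'" + candidates[i] + "', monospace";
        if (span.offsetWidth !== baseWidth || span.offsetHeight !== baseHeight) {
            found.push(candidates[i]);
        }
    }
    document.body.removeChild(span);
    return found;
}
// e.g. detectFonts(["Calibri", "Ubuntu", "Futura", "Garamond"])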
Plugins
On Firefox and WebKit-based browsers, navigator.plugins provides the names and versions of the browser plugins (Adobe Acrobat, Flash, Java etc). Iterating through this is simple. Although MSIE has a plugins property in navigator, it is not populated. In order to get information about a plugin you need to create an instance of it. And there is no standard API in ActiveX objects to get version information.
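Enumerating the plugins where the API is populated is only a few lines (sketch below; the MSIE/ActiveX probing is deliberately not shown):

// Enumerate navigator.plugins on Firefox / WebKit browsers. On MSIE the
// array is empty, so plugins have to be probed via ActiveX instead.
function listPlugins() {
    var out = [];
    if (navigator.plugins && navigator.plugins.length) {
        for (var i = 0; i < navigator.plugins.length; i++) {
            var p = navigator.plugins[i];
            out.push(p.name + (p.description ? " (" + p.description + ")" : ""));
        }
    }
    return out;
}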
Eric Gerds has written some code for getting information about common plugins; however, he doesn't reveal much about his methods - and trying to reverse engineer the obfuscated javascript is a bit of a task.
On the builtfromsource blog (author does not seem to provide any identity information) there are examples of how to detect/get version information from some of the more common ActiveX plugins.
While the timezone offset (e.g. +0100) is available to Javascript, the actual time zone name (e.g. Europe/London) contains a lot more information; but the latter is not exposed to Javascript. Phil Taylor reports that different time zones with the same offset can have different dates on which daylight saving time is switched on. However his method does require a lot of computation in the browser - approx 15k date calculations. There is some scope for optimizing this though (e.g. only looking at the last weeks of March and October).
Josh Fraser offers a rewritten script for detecting both the timezone offset and whether DST applies for the current TZ.
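For comparison, a much cheaper sketch that just samples the offset in January and July to derive a standard-time offset and a DST flag (this is the common trick, not Phil Taylor's or Josh Fraser's exact code):

// Timezone fingerprint component: compare the UTC offset in January and July.
// The pair gives the standard-time offset and whether DST is applied at all.
function timezoneComponent() {
    var year = new Date().getFullYear();
    var jan = new Date(year, 0, 1).getTimezoneOffset();
    var jul = new Date(year, 6, 1).getTimezoneOffset();
    return {
        current: new Date().getTimezoneOffset(), // minutes west of UTC, now
        standard: Math.max(jan, jul),            // standard-time offset
        usesDst: jan !== jul                     // any DST at all?
    };
}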
Does it work?
I've mentioned I did some testing: I wrote a script using feature detection, arguments.callee.toString().length, font detection and plugin detection (i.e. specifically not using the User-Agent string) and ran it on some computers at work.
Where I work the computers are all installed from standard builds. So far, I've got 23 fingerprints from 24 (supposedly identical) machines - i.e. I've only got 2 machines returning the same fingerprint.
I was concerned that instantiating the ActiveX objects would have an impact on performance and/or be otherwise visible to the user. In my test script, creating instances of Acrobat, Flash, RealPlayer and MSWindows Media Player took 2-3 seconds - so not terribly intrusive. In one case, the user got a warning message regarding Acrobat (he'd not accessed any PDF files since the system had been installed; the other plugins did not produce any visible warnings). The time taken for the remainder of the javascript to run is negligible.
Where's my code? Sorry, don't want to make it too easy for the bad guys to see what I'm doing. If you follow the links I've provided you'll get the same functionality with just a little cutting and pasting.
Sunday, 9 October 2011
Data quality vendor not interested in data quality?
CIFAS should be all things to all people. They provide a platform for members to share data about fraudulent transactions - and provide ways of protecting individuals against identity theft. All wrapped in a not-for-profit organisation.
But dig below the surface and all is not as it seems.
Part of the facilities they provide is protective registration. This means that either at your request or the request of a CIFAS member, they will place a notice on your credit records saying that when a credit application is made in your name or from your address, there should be additional checks on the identity of the party applying.
This helps with the big problem of identity fraud: regaining control of an identity and preventing further abuse.
However go and have a Google for them. There seems to be an awful lot of people out there who are not being protected - they are being prevented from obtaining credit due to a CIFAS listing. So at best, CIFAS have failed to communicate what their policy is to their members.
But suppose you find yourself unfairly blacklisted by CIFAS. How do you go about correcting this? Surely CIFAS, who generate income from providing accurate information, would not only take an active interest in resolving individual cases, but would also seek to monitor the reputation of their members' recommendations? Indeed, according to the Data Protection Registrar, that is what they are obliged to do, regardless of their business model.
However according to the CIFAS website, issues regarding inappropriate/inaccurate registrations must be directed to the member company and "CIFAS will not become involved in a dispute until the CIFAS Member has issued a Final Response letter."
Friday, 4 March 2011
UK Government website privacy abuse?
Anyone who knows me will not be surprised to hear that I think measuring user-experience and how users interact with your website is a very good idea. If you're in the business of trying to collect or analyse this information, then this post is addressed to you.
As I've often said, looking at the standard server-side logs can be very informative - but it's only half the story. To get a better picture you need to go client-side. And that means Javascript. For many people / organisations, there just isn't the time or money to develop your own solution - and of course there are no end of vendors trying to flog their wares to you.
This post was prompted by a wasted hour investigating unusual patterns in referer stats. Where I work, phishing poses a very serious risk. Despite this (and a large IT staff, a dedicated security team and an annual turnover well into the billions), there are no SPF records in our published DNS records! The referer stats for our customer-facing website show our logos appearing in lots of web-based email readers (including those from service providers who are known to validate SPF) - implying that it is more than just a risk. This is a shocking and absurd set of circumstances which I am still trying to resolve after 2 years.
However, that's not what this gripe is about.
This week I noticed a few referrals from a very long URL starting with xxxxx.stcllctrs.com (where xxxxx is the name of my employer's parent organisation). The URL was not obviously an email reader. Dropping the URL into a browser returned a 200 response with no content. So I had a look at the root URL, http://xxxxx.stcllctrs.com/, where I found the documentation for 'jsunpack' (http://jsunpack.jeek.org/dec/go), a tool 'designed for security researchers and computer professionals' - primarily for unpacking obfuscated javascript. Interestingly, the URL for jsunpack seems to link to a form allowing people to report possible abuses of the tool - which has a record of its use at http://xxxxx.stcllctrs.com/ flagged as suspicious.
I then Googled for xxxxx.stcllctrs.com and found that our parent organisation had several references to this site, loading javascript files and NOSCRIPT content. Looking at the Javascript it was serving up, it was rather difficult to read (since it was obfuscated) but seemed to be doing strange things with cookies. The domain also appears in several ad blocking lists. Alarm bells started ringing!
Of course my employers make up for the quality of the security policy with the quantity of it - so I couldn't do a proper whois lookup - but using tools on the web, the address turned out to be in a /16 netblock owned by Savvis.net. The name is registered with viatel.com. So both the netblock and DNS registration are effectively anonymous.
Obfuscated code, unusual URLs, cookie manipulation, anonymous hosting, greyware listings - DING DING DING!!!
Most of the whois services available online are provided by companies trying to sell registration services - the one I used initially did not provide any information about the registrant (and reformatted the content significantly so it looked like viatel was the registrant). But I eventually found another site (in Romania of all places!) which gave the registrant contact - speed-trap.com limited. This proved to be the Rosetta stone to unravelling what was really going on.
Speed-Trap appear to be a legitimate organisation providing web-usage monitoring services to companies. Surprisingly, they have a number of very high profile customers including direct.gov.uk, RBS, Axa and others. Yet they behave online like a script-kiddy - obfuscating their identity as well as the code deployed to run in my browser, and leaving other people's hacking code on their own website.
DirectGov have a link to their privacy policy on each and every page in their site (for the benefit of those from the colonies - DirectGov is the single, open access portal spanning all central government services in the UK). They clearly state they use javascript and cookies to record and analyse your usage of the site. They do not state that this information is processed by a third party. Indeed they go to unusual lengths to suggest that this information would only be shared with other bodies in extreme circumstances. RBS and Axa take a similar tack:
http://www.direct.gov.uk/en/SiteInformation/DG_020456
http://www.rbs.co.uk/global/f/privacy.ashx
http://www.axa.co.uk/privacy
From https://www.dephormation.org.uk/
"Intercepting, monitoring, eavesdropping, tapping communications requires legal authority, or consent from both parties to the communication."
Although there are some differences from BT's Phorm rollout (in that case, it was clear that Phorm were using the information for purposes other than just usage analysis), I find it very worrying that the UK government and several large financial institutions should be misleading their customers (or citizens) like this.