Showing posts with label privacy. Show all posts

Tuesday, 22 July 2014

Browser Fingerprinting - digging further into the client

I previously wrote about some techniques for Browser Fingerprinting (or "Device Identification" as it's known in some circles). Today I came across an interesting technique already in widespread use which detects variations between devices by looking at how content is rendered by WebGL / HTML5 Canvas.

Typically, as much of the processing for these as possible is pushed onto the underlying hardware, so the results for a given device are consistent independently of the OS / software - while differences in hardware produce a surprising amount of variation between devices. However, there's not sufficient variation for it to be used in isolation from other methods.
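The canvas variant can be sketched in a few lines: draw some text and shapes, serialize the result and hash it. This is a generic illustration of the technique (the hash is a simple FNV-1a and the drawing operations are arbitrary), not the code of any particular vendor:

```javascript
// Simple 32-bit FNV-1a hash - not cryptographic, just a compact label
function hashString(s) {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return (h >>> 0).toString(16);
}

// Draw text and shapes on a canvas, then hash the serialized pixels.
// Rendering differs subtly across GPU / driver / OS combinations, so
// the hash varies between devices. Returns null outside a browser.
function canvasFingerprint() {
  if (typeof document === 'undefined') return null;
  const c = document.createElement('canvas');
  c.width = 200;
  c.height = 40;
  const ctx = c.getContext('2d');
  ctx.textBaseline = 'top';
  ctx.font = "16px 'Arial'";
  ctx.fillStyle = '#f60';
  ctx.fillRect(0, 0, 100, 20);
  ctx.fillStyle = '#069';
  ctx.fillText('fingerprint \u{1F600}', 2, 2);
  return hashString(c.toDataURL());
}
```

Run the same snippet in two browsers on one machine, or one browser on two machines, and the hashes will usually differ.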

Update: 28 Jan 2015

Another interesting article found here lists Google Gears detection and the MSIE security policy as enumerable interfaces for fingerprinting. (The TCP/IP parameters are presumably gathered serverside, while proxy detection uses Flash.) But the really interesting bit is that 2 of the products tested tried to hook up with spyware on the client!

Tuesday, 1 October 2013

Daily Mail Fail


What looked like an interesting link appeared in my inbox the other day, so I followed it to read the article. The link in question was to a page on the www.thisismoney.co.uk site - owned and operated by the Daily Mail and proud to describe itself as "Financial Website of the year".

I did not expect the Daily Mail to let the facts get in the way of a good story - and the article did little to improve my impression of them. However, I was surprised at how poor the performance was... and then discovered how poor they really are at IT services.

I noticed that the content continued to load for some time after landing on the page.

Broadbandspeedchecker.co.uk clocks my download speed at 44.95 Mb/s, not bad, although the latency from Maidenhead seems high at 168ms RTT. But the page from the Daily Mail took 47.42 seconds to reach the onload event, then continued downloading stuff for a further 42 seconds: 1 minute and 29 seconds to download a single page?

There was only 1.4Mb of data in total, but split across no less than 318 requests across 68 domains, including 12 404s from *.dailymail.co.uk, erk!

But digging further I found that the site did not just perform badly – it's probably illegal.

In addition to (what appears to be) the usual 4 Google Analytics cookies, my browser also acquired session cookies from .thisismoney.co.uk, .rubiconproject.com, b3-uk.mookie1.com (x2) and .crwdcntrl.net (x2), plus... 129 cookies with future expiry dates.

FFS!

(a full list appears below)
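For anyone wanting to repeat the count on another site, a quick sketch of tallying the cookies visible to script. Note the limitations: document.cookie only exposes non-HttpOnly cookies for the current scope and never shows expiry dates, so a browser's cookie manager will generally report more than this does:

```javascript
// Count the cookies a page's scripts can see via document.cookie.
// Returns null when there is no DOM (e.g. run under Node).
function countVisibleCookies() {
  if (typeof document === 'undefined') return null;
  const jar = document.cookie; // "name1=value1; name2=value2; ..."
  return jar ? jar.split('; ').length : 0;
}
```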

For the benefit of any readers outside the European Union, member countries must all implement a set of LAWS (not rules, or guidelines) regarding the use of any data stored on a computer, including cookies. In the UK, these are described by the Privacy and Electronic Communications (EC Directive) (Amendment) Regulations 2011, which websites were required to implement in 2012.

Did the Daily Mail inform me that it was going to store these cookies?

No

Did the Daily Mail ask for my consent to store these cookies?

No

Did the Daily Mail provide any information about cookies on the page?

No

Did the Daily Mail provide a link to their privacy policy on the page?

Yes, in teeny-weeny text – the very last visible element on the page.

Did the Daily Mail offer me a chance to opt-out of accepting the cookies?

No

Is this a world record?

Maybe?



In the absence of any means to tell the Daily Mail I don't want their cookies via their website, I thought I would use the method built into my browser (although the cookie law does require that I should not have to jump through these hoops for compliance). So I enabled the do-not-track feature in Firefox, deleted the cookies and cache, hit the reload button, and waited a further 44 seconds (my ISP has transparent caching)...


Can you guess what happened next?


All the cookies came back again.

The challenge

Do you know of a worse site than this for dumping cookies? Add a comment and a link to your analysis and I'll publish it.

Wednesday, 14 March 2012

Browser fingerprinting

I'm currently spending a significant amount of my time on fraud investigations at work. I've written some code which collates logs and transactional data then mashes it up to find patterns. It is accurately predicting the majority of the fraud. (I keep telling my boss I want to work on commission but so far I'm stuck with a salary). Although this is a huge leap forward from the position before I was involved, I'd like to reduce the remaining losses further.

My program relies heavily on IP addresses to identify individual client devices, while the groups carrying out the fraud are mostly using mobile dongles or proxying via vulnerable webservers in an effort to obscure their identity. So I've been looking at alternative methods for identifying who is at the far end of the wire.

(I should point out that in order to carry out a transaction on our system, the users must authenticate themselves therefore anonymity is not an issue for our legitimate users).

While evercookie looks ideal for our purposes, its high profile means that our attackers may be specifically on the lookout for it - as well as the risk that it may be flagged as malware by legitimate users running the right software. And although our legitimate users have a thoroughly verified identity, I consider undermining the security of their computers a step too far. The methods described in the Panopticlick project seem more appropriate, so I've been looking at these in some detail.

Which User Agent?

The obvious starting point is the user-agent. From the data I have already, a shared user-agent suggests that I can link transactions from different IP addresses. But user agents do change over time. Google Chrome is particularly troublesome - it appears to upgrade itself on the fly, even mid-session! And of course most browsers have tools for easily switching the user-agent reported in Javascript and in HTTP requests.

The only people publishing stats on faked user agents are, not surprisingly, people developing code in this area - and their sites are more likely to be visited by technically sophisticated users deliberately trying to test the detection. I think it's reasonable to surmise that faking of user agents in the wild is relatively rare - so if I can reliably detect a faked user agent, knowing what the real user agent is doesn't help much with generating a unique fingerprint. A further consideration is that, even were this possible, sending the real user-agent back serverside increases the visibility of the fingerprinting process to an attacker.

It's worth noting that the navigator object has other properties / methods indicating the identity of the browser. Notably
appCodeName
appName
appVersion
platform

With the user-agent switcher on Safari, navigator.userAgent and navigator.appVersion match the selected user agent in the switcher, and this is what is sent in the request. appName and appCodeName are always Netscape and Mozilla respectively. However, navigator.platform always reports win32, regardless.

With a Firefox user agent switcher, all the properties were changed.
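These inconsistencies suggest a simple cross-check, sketched below: compare the OS claimed in navigator.userAgent with navigator.platform. The platform prefixes are assumptions based on common values (Win32, MacIntel, Linux x86_64), and a switcher like Firefox's that fakes every property will slip past it:

```javascript
// Flag a user agent as probably spoofed when the OS named in the UA
// string disagrees with navigator.platform. Takes the navigator (or
// any object with the same properties) so it can be tested directly.
function uaLooksSpoofed(nav) {
  const ua = (nav.userAgent || '').toLowerCase();
  const platform = (nav.platform || '').toLowerCase();
  if (ua.includes('windows') && !platform.startsWith('win')) return true;
  if (ua.includes('macintosh') && !platform.startsWith('mac')) return true;
  if (ua.includes('linux') &&
      !(platform.includes('linux') || platform.includes('android'))) return true;
  return false;
}
```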

Pedro Laguna provides a method for detecting the browser type using the text of javascript exception messages. I'd previously found this approach very effective when fingerprinting SMTP servers, so I was optimistic that it could detect most instances of UA faking. However, although it works up to a point, it can produce nasty security warnings in some browsers and I had trouble accurately detecting MSIE 6 and Google Chrome. YMMV.
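The idea can be sketched as follows: provoke a TypeError and classify the engine by the wording of the message. The V8 pattern is the one I can check directly; the SpiderMonkey and JavaScriptCore patterns reflect their typical phrasing and should be treated as assumptions:

```javascript
// Classify the JS engine from the wording of a built-in error message.
// The exact strings are engine-specific and change between releases.
function engineFromErrorMessage() {
  let msg = '';
  try {
    null.foo; // deliberately throw a TypeError
  } catch (e) {
    msg = e.message;
  }
  if (/Cannot read propert/.test(msg)) return 'V8';            // Chrome, Node
  if (/null has no properties/.test(msg)) return 'SpiderMonkey'; // Firefox (assumed)
  if (/null is not an object/.test(msg)) return 'JavaScriptCore'; // Safari (assumed)
  return 'unknown: ' + msg;
}
```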

Robert Accettura has a nice writeup of the detection implemented in jQuery, Prototype and YUI, which parse the user agent string, while Mootools uses feature detection. The Mootools implementation only differentiates between browser vendors - not between versions from the same vendor.

A paper by Mowery, Bogenreif, Yilek and Shacham describes a methodology for identifying browsers based on their javascript execution characteristics. However, they don't publish the exact code they used for their fingerprinting. I'm also sceptical of how effective the resolution would be on a wide variety of client machines running other applications concurrently - and without access to their code it would be a lot of effort to test myself.

The enigmatic Norbert proposes using variations in Javascript parsing metrics (via arguments.callee.toString().length) - this led me to some more specific articles on the subject, notably those by Bojan Zdrnja on SANS (1)(2).

Again, this differentiates between families of parsing engines rather than individual versions. Using a test script I got the following values:
115 - Safari 3
116 - Firefox 10
187 - Chrome 5, MSIE 6
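A minimal version of the trick: measure the length of the function's own decompiled source, which varies between engines because each serializes whitespace and keywords differently. Note that arguments.callee is forbidden in strict mode, so this sketch degrades to null there:

```javascript
// Parser-metric fingerprint: the length of this function's own
// toString() output differs between JS engines. Only works in
// sloppy-mode scripts; strict mode forbids arguments.callee.
function parserMetric() {
  try {
    return arguments.callee.toString().length;
  } catch (e) {
    return null; // strict mode
  }
}
```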

Another approach is simply to look at what functionality is exposed by the javascript API. Developers generally add features and rarely retire them, so the set of available APIs maps quite well onto specific versions of a browser. These pages have some more details on feature detection. (3)(4)(5)

So, based on my research, I used feature detection as the primary driver for my user-agent checker, falling back on the Javascript parsing metrics for Firefox-specific tests. My script feeds the parser check, screen size / colour depth and language, as well as the availability of selected APIs, into the fingerprint.
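The shape of such a script can be sketched like this. The feature list below is illustrative, not my actual list, and the env parameter stands in for window so the function can be exercised outside a browser:

```javascript
// Combine feature-detection bits, screen metrics and language into a
// single fingerprint string. In a browser you would call
// collectSignals(window); each component adds a little entropy.
function collectSignals(env) {
  const features = [
    'JSON' in env,
    'postMessage' in env,
    'localStorage' in env,
    'openDatabase' in env,
    'XMLHttpRequest' in env,
  ].map(Number).join(''); // e.g. "11010" - one bit per probed API
  const s = env.screen || {};
  const nav = env.navigator || {};
  return [
    features,
    s.width + 'x' + s.height + 'x' + s.colorDepth,
    nav.language || nav.userLanguage || '',
  ].join('|');
}
```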

Fonts
Several of the published documents / code reference the set of installed fonts as a good indicator of variability. Most use Flash or Java to get the list of fonts. Interestingly, both seem to return an unsorted list - the ordering is determined by where the fonts appear on the disk - adding yet more unique behaviour. Not having a development toolkit for either meant I had limited scope for testing this myself, but I did come across some code written by Lalit Patel. Lalit renders a fixed string with a font-selector naming the preferred font followed by different fallback fonts - if the size of the rendered string is the same regardless of the fallback, the system must have the preferred font available - neato!

Taking this one step further: given a list of fonts to check for, I can build up a list of what's available. Of course, looking for, say, Arial on an MS Windows platform, Helvetica on MacOS or Vera on Linux isn't going to tell me very much - but on www.codestyle.org I found lists of the less common fonts. Checking for the most common fonts doesn't add much variability, while checking for very rare fonts adds code with little yield - so I built my list from the fonts lying between these extremes.
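The width-comparison trick can be sketched as below: measure a test string in a plain fallback font, then in "CandidateFont, fallback"; if the measured width changes, the candidate must have been used. This is a generic rendition of the approach rather than Lalit's actual code, and it needs a DOM:

```javascript
// Return the subset of candidate font names that appear to be
// installed, by comparing rendered widths against a monospace
// baseline. Returns null when there is no DOM.
function detectFonts(candidates) {
  if (typeof document === 'undefined') return null;
  const span = document.createElement('span');
  span.style.cssText = 'position:absolute;left:-9999px;font-size:72px';
  span.textContent = 'mmmmmmmmmmlli'; // wide glyphs amplify differences
  document.body.appendChild(span);

  span.style.fontFamily = 'monospace';
  const baseline = span.offsetWidth;

  const found = candidates.filter(font => {
    span.style.fontFamily = `'${font}', monospace`;
    return span.offsetWidth !== baseline; // width changed => font used
  });
  document.body.removeChild(span);
  return found;
}
```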

Plugins
On Firefox and WebKit-based browsers, navigator.plugins provides the names and versions of the browser plugins (Adobe Acrobat, Flash, Java etc.), and iterating through it is simple. Although MSIE has a plugins property on navigator, it is not populated; to get information about a plugin you need to create an instance of it, and there is no standard API in ActiveX objects for retrieving version information.
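The enumeration itself is trivial; a sketch follows (the version property is present on Firefox's Plugin objects but not guaranteed elsewhere, hence the fallback). In a browser you would call listPlugins(navigator):

```javascript
// Build a list of "name version" strings from a navigator-like
// object. Degrades to an empty list where plugins is missing or
// unpopulated, as it is in MSIE.
function listPlugins(nav) {
  const out = [];
  const plugins = (nav && nav.plugins) || [];
  for (let i = 0; i < plugins.length; i++) {
    out.push(plugins[i].name + ' ' + (plugins[i].version || ''));
  }
  return out;
}
```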

Eric Gerds has written some code for getting information about common plugins, however he doesn't reveal much about his methods - and trying to reverse engineer the obfuscated javascript is a bit of a task.

On the builtfromsource blog (author does not seem to provide any identity information) there are examples of how to detect/get version information from some of the more common ActiveX plugins.

While the timezone offset is available to Javascript (e.g. +0100), the actual time zone (e.g. Europe/London) contains a lot more information; unfortunately the latter is not available to Javascript. Phil Taylor reports that time zones with the same offset can differ in the dates on which daylight saving time is switched on. His method does require a lot of computation in the browser - approximately 15k date calculations - though there is some scope for optimizing this (e.g. only looking at the last weeks of March and October).

Josh Fraser offers a rewritten script for detecting both the timezone offset and whether DST applies for the current TZ .
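Combining the cheaply available signals, a sketch: the raw offset, whether the zone observes DST at all (January and July offsets differ), and whether DST is currently in force. Because DST always moves clocks forward, the larger of the two getTimezoneOffset values is standard time in either hemisphere:

```javascript
// Summarise the timezone-related entropy available to Javascript.
// getTimezoneOffset() returns minutes *behind* UTC (e.g. UTC+1 => -60).
function timezoneSignal(now = new Date()) {
  const year = now.getFullYear();
  const jan = new Date(year, 0, 1).getTimezoneOffset();
  const jul = new Date(year, 6, 1).getTimezoneOffset();
  const standard = Math.max(jan, jul); // larger offset = standard time
  return {
    offsetMinutes: now.getTimezoneOffset(),
    dstObserved: jan !== jul,                       // zone has DST at all
    dstActive: now.getTimezoneOffset() < standard,  // DST in force now
  };
}
```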

Does it work?
I've mentioned I did some testing: I wrote a script using feature detection, arguments.callee.toString().length, font detection and plugin detection (i.e. specifically not using the user Agent string) and ran it on some computers at work.

Where I work the computers are all installed from standard builds. So far I've got 23 distinct fingerprints from 24 (supposedly identical) machines - i.e. only 2 machines returned the same fingerprint.

I was concerned that instantiating the ActiveX objects would have an impact on performance and/or be otherwise visible to the user. In my test script, creating instances of Acrobat, Flash, RealPlayer and Windows Media Player took 2-3 seconds - not terribly intrusive. In one case the user got a warning message regarding Acrobat (he hadn't accessed any PDF files since the system was installed; the other plugins did not produce any visible warnings). The time taken for the remainder of the javascript to run is negligible.

Where's my code? Sorry, don't want to make it too easy for the bad guys to see what I'm doing. If you follow the links I've provided you'll get the same functionality with just a little cutting and pasting.

Friday, 4 March 2011

UK Government website privacy abuse?

Anyone who knows me will not be surprised to hear that I think measuring user-experience and how users interact with your website is a very good idea. If you're in the business of trying to collect or analyse this information, then this post is addressed to you.

As I've often said, looking at the standard server-side logs can be very informative - but its only half the story. To get a better picture you need to go client-side. And that means Javascript. For many people / organisations, there just isn't the time or money to develop your own solution - and of course there are no end of vendors trying to flog their wares to you.

This post was prompted by a wasted hour investigating unusual patterns in referer stats. Where I work, phishing poses a very serious risk. Despite this (and a large IT staff, a dedicated security team and an annual turnover well into the billions), there are no SPF records in our published DNS! The referer stats for our customer-facing website show our logos appearing in lots of web-based email readers (including those from service providers known to validate SPF), implying that phishing is more than just a risk. This is a shocking and absurd set of circumstances which I am still trying to resolve after 2 years.
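For readers unfamiliar with SPF: publishing even a minimal record is a single DNS TXT entry. A hedged illustration, with example.com as a placeholder - the mechanisms would need to match the organisation's real mail infrastructure:

```
; Authorise only the hosts named in the domain's MX records to send
; mail for it, and hard-fail everything else.
example.com.  3600  IN  TXT  "v=spf1 mx -all"
```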

However, that's not what this gripe is about.

This week I noticed a few referrals from a very long URL starting with xxxxx.stcllctrs.com (where xxxxx is the name of my employer's parent organisation). The URL was not obviously an email reader. Dropping the URL into a browser returned a 200 response with no content, so I had a look at the root URL, http://xxxxx.stcllctrs.com/, where I found the documentation for 'jsunpack' (http://jsunpack.jeek.org/dec/go), a tool 'designed for security researchers and computer professionals'. This is primarily a javascript de-obfuscation tool. Interestingly, the jsunpack site links to a form allowing people to report possible abuses of the tool - which has a record of its use at http://xxxxx.stcllctrs.com/ flagged as suspicious.

I then Googled for xxxxx.stcllctrs.com and found that our parent organisation's sites had several references to it, loading javascript files and NOSCRIPT content. The Javascript it was serving up was rather difficult to read (since it was obfuscated) but seemed to be doing strange things with cookies. The domain also appears in several ad-blocking lists. Alarm bells started ringing!

Of course, my employers make up for the quality of their security policy with the quantity of it - so I couldn't do a proper whois lookup - but using tools on the web, the address turned out to sit in a /16 netblock owned by Savvis.net, while the name is registered via viatel.com. So both the netblock and DNS registration are effectively anonymous.

Obfuscated code, unusual URLs, cookie manipulation, anonymous hosting, greyware listings - DING DING DING!!!

Most of the whois services available online are provided by companies trying to sell registration services - the one I used initially did not provide any information about the registrant (and reformatted the content so heavily that it looked like Viatel was the registrant). But I eventually found another site (in Romania of all places!) which gave the registrant contact: speed-trap.com limited. This proved to be the Rosetta stone to unravelling what was really going on.

Speed-Trap appear to be a legitimate organisation providing web-usage monitoring services to companies. Surprisingly, they have a number of very high-profile customers, including direct.gov.uk, RBS, Axa and others. Yet they behave online like a script kiddie - obfuscating their identity as well as the code deployed to run in my browser, and leaving other people's hacking code on their own website.

DirectGov have a link to their privacy policy on each and every page in their site (for the benefit of those from the colonies - DirectGov is the single, open-access portal spanning all central government services in the UK). They clearly state they use javascript and cookies to record and analyse your usage of the site. They do not state that this information is processed by a third party. Indeed, they go to unusual lengths to suggest that this information would only be shared with other bodies in extreme circumstances. RBS and Axa take a similar tack.

http://www.direct.gov.uk/en/SiteInformation/DG_020456
http://www.rbs.co.uk/global/f/privacy.ashx
http://www.axa.co.uk/privacy

From https://www.dephormation.org.uk/
"Intercepting, monitoring, eavesdropping, tapping communications requires legal authority, or consent from both parties to the communication."

Although there are some differences from BT's Phorm rollout (in that case, it was clear that Phorm were using the information for purposes other than just usage analysis), I find it very worrying that the UK government and several large financial institutions should be misleading their customers (or citizens) like this.