Saturday, 22 December 2007

Potent messages of impotent industries

I should probably know better than to open my mouth, but the obvious has to be stated on this one. For anyone net-savvy enough to know what BitTorrent is, the news that TorrentSpy has just lost its court case against the MPAA isn't exactly surprising.

Harking back all the way to Napster, we seem to have an annual tag-teaming of court cases brought by the RIAA and the MPAA to bring these "nasty pirate companies" such as TorrentSpy to heel.

Sites documenting the ins and outs of the case are plentiful so I won't go into detail. (For more info see the BBC report, as it's quite neutral.)

After every one of these cases new technologies spring up, either to better protect people's privacy or to improve the technology itself (Napster giving way to Kazaa and others, which in turn gave way to the BitTorrent protocol).

The recording and movie industries are worried because they are no longer the gatekeepers to content, able to charge what they like for it. As such the "dirty pirates" must be prosecuted even if they are, as in TorrentSpy's case, nothing more than a pointer to where the content is being held.

The great amusement in this particular case is that the only reason the MPAA "won" is that TorrentSpy refused to hand over its tracker and user data, since doing so would have breached Dutch data protection laws. As such the MPAA won by default.

Had this truly been a court case, it would have come to light that TorrentSpy provides a framework for people to post tracker data about files they have on their own machines and holds no copies of any of the actual files. The MPAA would probably still have had them closed down, but their legal case was always going to be shaky.

So TorrentSpy will be closed and bankrupted, but there will be a dozen smaller companies waiting in the wings to see if they can bleed the MPAA that little bit drier.

You see, the big problem here is that the MPAA can't let up now. It doesn't have the mechanics in place to distribute online properly (unlike music, where iTunes and others provide the service), not least because of the antiquated territorial boundaries films get sold by.

As such we'll be seeing another legal case next year - maybe isoHunt will be next - and another company will collapse, but then dozens more will set up for a brief stab at providing content to the people.

The quote from the MPAA spokesman is great:

"The court's decision... sends a potent message to future defendants that this egregious behaviour will not be tolerated by the judicial system," John Malcolm, the MPAA's executive vice president and director of worldwide anti-piracy operations, said in a statement.

"The sole purpose of TorrentSpy and sites like it is to facilitate and promote the unlawful dissemination of copyrighted content. TorrentSpy is a one-stop shop for copyright infringement."


What's most amusing is that, according to many sources, almost as much music is now downloaded from "official" sources as is downloaded illegally. Surprising how, given the tools, a cessation of hostility towards users and a price point that accurately reflects the product being sold, the consumer comes to the party once again.

The MPAA still has a lot to learn about the Internet - one wonders how much it will cost them in legal fees in the meantime.

Friday, 21 December 2007

My top 5 jQuery seasonal wishes

I've waxed lyrical about jQuery before; I've been using it a lot for worker code that I just can't be bothered to hand-write any more. Not least because jQuery handles all the little browser inconsistencies for me, so the code I actually call into a page is infinitely more maintainable, especially if someone follows behind who maybe isn't as up to speed with JavaScript as I am.

However, use a tool for long enough and, as they say, familiarity breeds contempt. In this vein (and regular readers will know I don't do complimentary very often), and in the spirit of the seasonal "list programmes" of every style, these are the top 5 things I'd like to see incorporated into jQuery in the next year.

5. Documentation - Starting off slowly and easily, I'd definitely like to see some better documentation. Ideally I'd like new sublibraries not to be included until their documentation is properly up to scratch. Some areas are very well documented; other areas are sketchy at best.

4. Wait(msecs, callback) - Part of the effects sublibrary: we have all kinds of effects to make objects slide, fade and animate, but we don't have a wait command. What I would give for a command that you can just append to a sequence of animations to wait for a period of time before calling another function or stepping to the next instruction.

As you can see from my jQuery Slideshow, the common way to do this is to repeat your last animate() instruction and hang a callback off it. It's not big or clever but it does the job.
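By way of illustration, a rough sketch of both the workaround and the sort of wait() plugin I'm wishing for (the selector and durations are made up, and the plugin is hypothetical rather than existing jQuery API):

// The current workaround: repeat the last step as a visual no-op purely to buy a delayed callback.
$('#banner')
    .fadeIn('slow')
    .animate({ opacity: 1 }, 2000, function () {
        // fires roughly two seconds after the fade has finished
        $(this).fadeOut('slow');
    });

// What a wait(msecs, callback) might look like; note this uses a plain timer
// rather than hooking into the fx queue, so it is only a sketch of the idea.
jQuery.fn.wait = function (msecs, callback) {
    return this.each(function () {
        var el = this;
        setTimeout(function () {
            if (callback) {
                callback.call(el);
            }
        }, msecs);
    });
};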

3. fadeToggle(speed) - Again part of the effects sublibrary; we have slideToggle, which is a great bit of code: call it and the object either slides open or shut depending on its state. It would be great to have the same thing with fade rather than writing detection code and then calling fadeIn or fadeOut.
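A minimal sketch of the kind of thing I mean (this plugin is hypothetical, not part of jQuery):

// Fade in if the element is hidden, fade out if it is visible.
jQuery.fn.fadeToggle = function (speed, callback) {
    return this.each(function () {
        var el = jQuery(this);
        if (el.is(':hidden')) {
            el.fadeIn(speed, callback);
        } else {
            el.fadeOut(speed, callback);
        }
    });
};

// Usage: $('#details').fadeToggle('slow');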

2. State detection - Another worker function would be really useful here to determine whether an object is on or off in display terms. I am fully aware I can use document.getElementById(objname).style.display or equally $(selector).css('display'); however, this will return "none" if it's off, but it could also return "block", "inline", "table", "table-cell", "list-item" and so on depending on what the element is.

Ideally I'd like $().displayState() to return "on" or "off", or indeed true or false as a boolean, so it would make the logic of display code even easier.
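Something along these lines would do as a sketch (displayState() is my invention, not an existing jQuery method):

// Returns true if the first matched element is currently displayed, false otherwise.
jQuery.fn.displayState = function () {
    return this.is(':visible');
};

// Which turns the calling logic into a simple boolean test:
if ($('#menu').displayState()) {
    $('#menu').slideUp('fast');
} else {
    $('#menu').slideDown('fast');
}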

And finally,

1. Cast to DOM object - One of the best things about jQuery is its query language. Using selectors from the CSS and XPath specifications, pulling objects out of the document is so much easier than using DOM traversal methods.

However, sometimes the jQuery functions just aren't enough and we need to cast an object back to real JavaScript to play with it - a simple method of doing this would give us the power of a great interrogation language along with the ability to get at a real DOM object.
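For what it's worth, a jQuery object is array-like, so something along these lines should already hand back a real DOM node (the selector here is purely illustrative):

// Pull the underlying DOM element back out of a jQuery result set.
var node = $('#content div.item')[0];          // array-style indexing
var sameNode = $('#content div.item').get(0);  // the equivalent via get()

// From here plain DOM properties and methods are available again.
if (node) {
    node.className = 'highlighted';
}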

I fully expect someone to come and kick me now, telling me I can do some or all of these things and that the functions I'm asking for already exist; however, the documentation, as mentioned in number 5, is lacking in some areas so it isn't obvious whether they are doable.

Obviously this is a little tongue-in-cheek, as if I were that worried about these issues I'd write the code myself and submit it to the team for inclusion in the next version. Indeed, perhaps that could form the basis of one of my New Year's technology resolutions.

Happy Holidays all.

Wednesday, 19 December 2007

SMS Bamboozlement...

I'm doing some work for a client at the moment whose industry is particularly technophobic. The absolute cutting edge is a bit of YouTube video thrown willy-nilly into a page. I'd also point out that design is something that rarely makes an appearance in this particular industry.

So it was pretty refreshing when we went to them with a series of ideas from the more commercial sectors of New Media, and one of the things they latched onto was SMS. Cue annoyance, though, when we had already got everything ready to go other than pushing the big green "launch" button, and another company got involved and started talking about location-aware services and high-end data capture etc.

At this point the client dissolved into a mess of indecision - "Why weren't we doing all of this?" was the question, to which the answer was "Because you don't need to - primarily because your text messaging service is built around raising revenue through donations!"

I've had this happen in the past, notably with SEO companies. I do pity the poor clients who get stuck in these situations where they've finally decided to push their technology base along but then get waylaid by all the glittery, flashing and hypnotic LEDs.

At the end of the day it is important to remember why you are doing something and not get sidetracked (and not get ripped off). Once a strong foundation of technology is laid there is always something new you can build - you don't have to have every shiny present under the tree to have a great Christmas.

Tuesday, 11 December 2007

.NET / XSLT and how to import an external XML document

I work with XML and XSLT every day of the week. Indeed, working for a company called XML Infinity you can imagine how much we use it. I had one of those incredibly frustrating moments this afternoon that one typically gets when dealing with badly documented parts of .NET or XSLT.

The annoyance in question was to do with loading a document into an XSL template on the fly. 99.9% of the time you don't bother with this, as you have a master XML document which you transform according to the XSL template that is assigned to it. All your XML processing is usually done before you get to this point.

There is, though, an XSL function called document() which you can use to load an external XML document into the XSL template and then work on it. I've used this before but the damn thing wouldn't work. Why not? Because our Transformation Engine wasn't using a loose enough resolver to be able to deal with externally referenced files... grrr. I know why MS did this: it's so the parsing engine doesn't go loading every document under the sun and potentially crash.

That's great but they could have documented it a bit better.

The resolution, by the way, is to create an XmlUrlResolver, give it some credentials (in my case setting it to DefaultCredentials, which allows you to access the http://, file:// and https:// protocols) and then pass that into your Transform() method.
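A minimal sketch of that in C#, assuming the standard XslCompiledTransform (file names and URIs here are purely illustrative):

using System.Net;
using System.Xml;
using System.Xml.Xsl;

class TransformWithResolver
{
    static void Main()
    {
        // A resolver with default credentials so document() can fetch http://, https:// and file:// URIs.
        XmlUrlResolver resolver = new XmlUrlResolver();
        resolver.Credentials = CredentialCache.DefaultCredentials;

        // document() is disabled by default, so it also has to be switched on in the settings.
        XsltSettings settings = new XsltSettings(true, false); // enableDocumentFunction, enableScript

        XslCompiledTransform xslt = new XslCompiledTransform();
        xslt.Load("template.xsl", settings, resolver);

        using (XmlReader input = XmlReader.Create("master.xml"))
        using (XmlWriter output = XmlWriter.Create("result.xml"))
        {
            // Pass the resolver into Transform() so document() calls made during the transform are resolved too.
            xslt.Transform(input, null, output, resolver);
        }
    }
}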

Job done.

Not quite.

Having finally been given access to an external XML document I then had to contend with XSL's arcane methods of dealing with XML fragments. Again documentation was the issue here.

Looking online, there are some ridiculously complex ways of parsing an external document when by rights it should be as simple as dropping the document into a variable and then processing against that variable. People were using recursive templates with xsl:copy and all kinds of things.

It turns out the way to do it is a little-known second parameter.

If you do this:

<xsl:variable name="var1" select="document('http://example.com/file.xml')"/>

All you'll end up with is the text nodes. Not very useful.

If you do this, however (note the second parameter):

<xsl:variable name="var1" select="document('http://example.com/file.xml', /)"/>

You'll end up with a fully fledged XML document, complete with nodes and everything, put into your $var1 variable, and you can then use it to select data with standard XPath constructs.

If you don't want the whole document you can pass the second argument as an XPath query and it will just return that node-set - much easier to deal with.
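To make that concrete, a quick example of using the variable afterwards (the element names are purely illustrative):

<xsl:variable name="catalogue" select="document('http://example.com/file.xml', /)"/>
<xsl:value-of select="$catalogue/catalogue/item[1]/title"/>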

In all the time I've been dealing with XML / XSL I didn't know about this and it was a great pain to figure out. Typically the only reason I was doing this was to mock something up for a client quickly and it then turned into a mammoth effort. Knowing now though will save time subsequently I guess...

Saturday, 1 December 2007

PCI DSS will wreak havoc on SMEs

One of my clients was asking me about PCI DSS certification today. Coincidentally I also received our letter about compulsory compliance to the PCI DSS standard.

Both of us are what are termed "Level 4 Merchants" - that is, we process fewer than 20,000 card transactions through the company in a year. Arguably Level 4 Merchants account for the largest number of businesses globally, as they incorporate pretty much every SME in PCI-compliant countries that takes a card as a form of payment (according to Visa, about 27 million businesses).

The standard itself is a worthy document - a dozen set-in-stone compliance rules to which businesses have to adhere. Most of it is common sense: setting your router password to something non-default, making sure card details are encrypted if they are to be stored, that sort of thing. Most businesses in the SME world would, in fact, already be compliant - mostly because they don't store data.

Here's the rub though. Barclaycard sent both my client and me a letter basically saying you have two options on compliance: either you do it yourself, or you get someone to help you (and of course they recommend a company, SecurityMetrics, to help you do it all - at a discounted rate, of course).

Obviously the first thing I did was go to the SecurityMetrics site and request a quote. As a Level 4 Merchant it will cost me a mere $699 per year to be assessed quarterly. However, they can tell me to do things to get me up to spec, which is then going to cost me more again. At the end of it they give me a pass or fail certification, and their audit is completely subjective.

After that I went and downloaded the whole specification and read it through twice, making a note against every point.

Typically, this isn't a document for the faint of heart. I'm lucky first in that I'm a techie and second in that I did my formative programming years in a bank, specialising in what was then the forerunner of InfoSec. There is not a single line of "plain English" in the whole thing.

A couple of non-techies I've shown it to got about a page in before giving up. Your average 1-5 employee company owner doesn't have a hope. Thus he'll end up paying $699 per year for what is essentially insurance.

Even amongst Level 1 Merchants, understanding and compliance are two different things as you can see on Evan Schuman's great article about recent stats to come out of the Level 1 camp.

Big companies have the resources to deal with this sort of stuff, and they are also more likely to be storing data on customers, so for them it is crucial. Whilst no less crucial for small businesses, the fact that a shop owner who only takes card payments from people when they are physically in his shop will still have to go through this audit is patently ridiculous.

Barclaycard are indemnifying themselves by playing the FUD card with comments like:


To date these penalties have not been passed on to any Level 4 Merchants, but from 30th April 2008 your business will be liable for PCI DSS penalty charges and costs associated if you fail to comply or have a data compromise.

Penalty charges can be considerable (in excess of £100,000) so, to protect your business, it is vital that you prepare for PCI DSS compliance by 30th April 2008 and continue to maintain compliance in the future.

What the PCI DSS standard fails to deal with, however, is systematic failure of employee behaviour. It doesn't deal with people skimming cards when they are taken out of sight, nor with employees writing details down on a piece of paper and passing them on when dealing with mail order, nor with phishing scams.

Indeed I had a card machine problem last week and the support officer at Barclaycard stated:


Just write the details down on a piece of paper and process them later

Hardly a piece of advice that should be followed to maintain security.

In the end businesses will have to make up their own minds about how best to deal with this new "virtual legislation" that is being thrust upon us. To me the whole thing reeks of the rise of the SEO industry piggybacking off Google's search technology.

In reality the biggest source of credit card fraud is the skimming of details through offline processes such as mail order (which happened to me recently - my bank caught it at the other end within a day), or else identity theft, whereby a new card is created in someone else's name.

None of the procedures outlined by the PCI DSS standard deal with these very real and growing issues - all they are doing is lining the pockets of consultant sharks that will feed on the SMEs who don't know any better, and penalising the merchants for actually trying to conduct business.