In the news recently: not only has the copy protection on HD-DVD and Blu-ray discs been cracked, but people have been posting Digg links containing the decryption key. I can understand Digg's initial position in removing those posts, at least until the community kicked off and the site decided it would rather go down with the ship than face prosecution quietly. Hurrah for someone over there seeing sense.
One part of my brain always goes "Hooray for the hackers" whenever we hear stories about DRM being cracked, in whatever guise it has been created. Another part, probably the more rational side, kicks in afterwards and points out that putting these things out in the wild enables more software and media piracy, which incurs costs for the companies that produce the content, which in turn makes them either raise prices or step up their counter-piracy measures. I never get to the "woe is me" stage like most media company execs do: these are multi-billion dollar organisations, so it's hardly going to come out of the mail boy's pay cheque and they are unlikely to go bust.
What I do question, though, is the rationale that got us here in the first place. Ever since tape-to-tape reel recording began to sit alongside vinyl records in the 60s, music, film and software piracy has grown and grown. What has happened is nothing short of an arms race, and three points have been consistent throughout it:
1. That I, or anyone else, once we have bought a product, have the right to play or use it for our own personal enjoyment whenever we see fit. This is the argument most consumers make - I might buy a CD album but I want to play it on my MP3 player; I might buy a DVD but I want to play it on my Linux laptop as well as on my TV.
2. That the companies producing consumable media assume anyone who wants to copy a product is inherently up to no good - such people are labelled pirates who are presumably flogging the music, film or software on backstreet market stalls.
3. That the prevalence of piracy is directly related to the first two points, and to how well piracy is policed within the community.
One can see straight away that selling knock-off copies of movies and music is completely against the law - you are selling someone else's work, which is tantamount to counterfeiting. The legal framework for dealing with these people already exists, though, and we are starting to see that sort of piracy go down.
The media companies will tell you that's thanks to their copy protection; in reality it's down to better policing, the trade being treated as the black-market operation it is, and the broader shift away from "hard crimes" that has occurred over the last 20 years.
That argument doesn't wash at all with consumers, though. Once I purchase a piece of media it is mine to use how I want, on whatever device I want.
The barriers the media companies are putting up, and their zero-tolerance attitude towards consumers, are cementing their position as the "big bad ogre" in all of this. Were they to engage with the consumers who simply want to move content from one format to another, they would probably be able to reach a solution.
Indeed, were they to strip DRM from their content altogether and spend the money on producing better content, or on supporting better policing, they would probably turn a larger profit.
In the words of Nixon, "I am not a crook" - but I do want to watch Spiderman 3 when it comes out: on my TV via my Xbox, on my Linux laptop, on my Windows Media Center and on my PDA. At the moment I'll be lucky if one of those four works, so I probably won't buy it at all.
Thursday, 3 May 2007
Tuesday, 1 May 2007
supercomputer required to simulate half a mouse brain
Scientists have reported using the IBM Blue Gene/L supercomputer to simulate half of a typical mouse's brain. More accurately, they've simulated about half the neurons and just over half the synaptic connections, running for 10 seconds - and because the simulation ran at roughly a tenth of normal speed, that works out to about one second's worth of real-time brain activity.
You can read the whole story here
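Just to make the timing claim concrete, here's the back-of-envelope arithmetic as I read it from the article - the figures are the rounded ones quoted above, not anything taken from the original paper:

```python
# Back-of-envelope for the timing claim above (rounded figures from the
# article, not from the original paper).
wall_clock_seconds = 10   # how long the simulation ran on Blue Gene/L
slowdown_factor = 10      # it ran at roughly a tenth of real-time speed

simulated_seconds = wall_clock_seconds / slowdown_factor
print(f"{wall_clock_seconds}s of compute ~ {simulated_seconds:.0f}s of simulated brain time")
# -> 10s of compute ~ 1s of simulated brain time
```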
Don't get me wrong, the guys at Nevada Uni have my utmost respect. I studied a lot of cognition and neural networks at university - in fact I specialised in the area, with a degree in Computer Science and Psychology - so I know how hard this is to do.
What gets me going, though, is the reasoning behind doing it this way. As can be seen here, the top supercomputer in the world can be brought to its knees by modelling half a mouse brain for a very limited period of time. The reason is the sheer number of connections [synapses] between neurons - a single neuron in a mouse can influence the behaviour of about 8,000 others. It doesn't take long for the cascade to build up and your computations to start slowing down.
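To get a feel for why that cascade hurts, here is a rough upper-bound calculation using the 8,000 fan-out figure quoted above. It deliberately ignores the fact that real neurons share downstream targets, so it overstates the spread, but it shows how quickly the numbers run away from you:

```python
# Crude upper bound on cascade growth: if each neuron can influence roughly
# 8,000 others, how many downstream neurons could be touched after a few hops?
# Overlap between targets is ignored, so real figures are much smaller.
FAN_OUT = 8_000

reachable = 1
for hop in range(1, 4):
    reachable *= FAN_OUT
    print(f"after {hop} hop(s): up to {reachable:,} downstream neurons")

# after 1 hop(s): up to 8,000 downstream neurons
# after 2 hop(s): up to 64,000,000 downstream neurons
# after 3 hop(s): up to 512,000,000,000 downstream neurons
```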
What I find most interesting is that Blue Gene was designed to simulate molecular interactions, particularly those associated with the degradation of the US nuclear weapons stockpile, yet it grinds to a halt on half a mouse brain.
I must say, though, that when I was playing with this over 10 years ago we were talking about models of ant or fruit-fly brains merely hundreds of neurons in size, and even those made our computers fall over. Given that baseline the achievement these guys have made is incredible, although needing the most powerful computer on the planet just shows how far we are from modelling a human brain.
Human brains typically have about 100 billion neurons, each with many thousands of synapses. Rough estimates put the total number of connections at about a quadrillion synapses, which for those of you who like zeros looks like this: 1,000,000,000,000,000.
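That quadrillion is simply the two rough estimates multiplied together. The 10,000-synapses-per-neuron figure below is my own assumption, chosen so the arithmetic lands on the number quoted; published estimates vary quite a bit:

```python
# ~100 billion neurons x ~10,000 synapses each ~= 1e15, i.e. a quadrillion.
# The per-neuron figure is an assumed mid-range estimate for illustration.
neurons = 100_000_000_000       # ~1e11
synapses_per_neuron = 10_000    # ~1e4 (assumption)

total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:,}")    # 1,000,000,000,000,000
```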
Also worth noting: when this is done, the synapses are typically assigned at random. It isn't a true model of how a brain is wired - there would be far too much information to configure, and it would have to be done by hand. In these models the neurons are loaded into the system and each is randomly assigned a number of dendrites that point at other randomly chosen neurons. You don't get real behaviour such as hearing or vision, but you do get a sense of how the flow of stimulus and response works.
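For what it's worth, here is a toy sketch of that random-wiring idea. The sizes, the firing threshold and everything else are my own simplifications purely for illustration - this is nothing like the actual model the researchers ran - but even something this crude gives you the stimulus-and-response flow described above:

```python
import random

# Toy version of the random-wiring approach: load some neurons, give each a
# random set of outgoing connections, then watch a burst of activity spread.
N_NEURONS = 1_000      # toy size, nothing like the real simulation
MAX_FANOUT = 50        # random fan-out per neuron
THRESHOLD = 2          # inputs needed in one step before a neuron fires

random.seed(1)

# Each neuron points at a random selection of other neurons.
targets = [random.sample(range(N_NEURONS), random.randint(1, MAX_FANOUT))
           for _ in range(N_NEURONS)]

# Kick things off with a random burst of activity.
active = set(random.sample(range(N_NEURONS), 50))

for step in range(1, 6):
    hits = {}
    for neuron in active:
        for target in targets[neuron]:
            hits[target] = hits.get(target, 0) + 1
    # Only neurons that received enough input fire on the next step.
    active = {n for n, count in hits.items() if count >= THRESHOLD}
    print(f"step {step}: {len(active)} neurons firing")
```

Swap the random wiring for connectivity taken from real anatomy and you quickly see why nobody configures that by hand.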
My own conclusion, from my studies and from keeping abreast of the topic since leaving formal education behind, is that small neural networks specialised to a particular task are more likely to produce results than large-scale applications like this. Even Mother Nature adopted this approach: throughout evolutionary history old structures are built upon by newer, more specialised ones - you only need to compare a reptilian brain with our own, particularly the basal ganglia, to see the similarities in structure and function. In pulling these specialised structures together you can then start achieving something that is greater than the sum of its parts.