This particular article was prompted by the (June 2012) news that the BBC was going to not only broadcast the 2012 Summer Olympics as normal through its TV channels, but also multicast it via the Internet. (Of course, that was only for UK residents; we US residents had to make do with NBC’s execrable coverage. Unless you knew about and had UK VPN access, cough, cough.) It prompted me to think about how streaming worked over the Internet and to explain it to the layman.
I started off, however, with Muzak: its owner held a patent for, er, streaming background music to shops and offices from some central point. Despite dating from the 1920s, this was the direct precursor to what we might recognize as streaming: the patent was about the efficient transmission of data (OK, a frequency-modulated signal) over wires.
For computers, the issue was all about the processing of data: not only did CPUs have to get fast enough to decompress video frame data at 24 frames per second, but the data bus to the video adapter and screen had to be able to cope with the several megabytes of data per second that even the old 320x240 resolution required. And then we get into the compression algorithms (the codecs) that make the transport of video data from server to PC more efficient. Once the stars had lined up – better frame compression using delta frames, greater bandwidth, faster CPUs (and GPUs, for that matter), better video adapters and monitors, higher resolutions – we arrive at today’s cultural life where it’s feasible to watch movies over the internet from Netflix or LoveFilm, or even, bizarrely, your own recorded movies while you’re away from home.
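To put a rough number on that bus requirement, here’s a back-of-the-envelope sketch. It assumes 24-bit colour (3 bytes per pixel) and uncompressed frames – assumptions on my part, since actual frame formats of the era varied:

```python
# Back-of-the-envelope bandwidth for uncompressed 320x240 video at 24 fps,
# assuming 24-bit colour (3 bytes per pixel).
width, height = 320, 240
bytes_per_pixel = 3
fps = 24

bytes_per_second = width * height * bytes_per_pixel * fps
print(f"{bytes_per_second / 1_000_000:.1f} MB/s")  # about 5.5 MB/s
```

Around five and a half megabytes every second, just to paint tiny postage-stamp video – which is why delta-frame compression and better codecs mattered as much as raw CPU speed.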
All in all, a good introduction to the issues that people have had to solve in order to get to where we are today: on demand watching of cat videos on YouTube.
This article first appeared in issue 324, August 2012.
You can read the PDF here.
This is the last typeset PDF I have from my days of writing for PCPlus. For whatever reason – work, travel, lack of ideas, I can’t remember – I didn’t write an article in early July for September 2012’s issue and so I didn’t appear. And then in August Alex, my editor, pinged me to tell me that October 2012 was to be the last issue of PCPlus, Future Publishing were closing it down, and would I like to write one final article? Like, by the following Monday? I decided to write a brief biographical article on Alan Turing (he would have been 100 in 2012) and the theorem he proved in answer to the Entscheidungsproblem: that there can be no general algorithm which, given a set of axioms and a statement in first-order logic, decides whether the statement is provable from those axioms. The proof used hypothetical computers that became known as Turing machines.
Although I sent off the article and it appeared in the final issue, the magazine closed down, the staff were cast to the four winds (OK, they were pretty much reallocated to other publications) and I never got that final PDF. I may just publish the final article as a blog post when the time comes, although it wasn’t the best thing I wrote.
So this is it. Finis. There are no more PCPlus posts like this one for me to write. It turns out I wrote 70 articles for them over the course of five-plus years and only missed two issues during that time. Although I got into the routine of having a four-weekly deadline, it was sometimes difficult to come up with “theory”-type ideas to write about. But, overall, I enjoyed it all. I learned things and I like to think my readers did too.
(I used to write a monthly column for PCPlus, a computer news-views-n-reviews magazine in the UK, which sadly is no longer published. The column was called Theory Workshop and appeared in the Make It section of the magazine. When I signed up, my editor and the magazine were gracious enough to allow me to reprint the articles here after say a year or so.)