Wednesday 12 March 2014

Preparing for Home Schooling ICT part 7

Spring brings more home-school ICT. This term, we’ll focus on our Raspberry Pi, and build something really cool. blog.mindrocketnow.com

From the blog, I see that the last time we did any ICT was back in the summer. The DDs lost interest, as there were more exciting things to do in their summer holidays than sitting inside with a computer. Then winter came, and we went into hibernation. But spring has now sprung, and as the days get longer, we’re doing more.

I’ve noticed that there’s now no difference between Early Years ICT (i.e. for DD2) and ICT for DD1. Both are equally adept at the basics: user input, understanding syntax and semantics, and understanding the need to translate human intentions into machine language. So I’ve decided to run the same course for each. DD1 has a longer attention span than DD2, so I may do more sessions for one than the other. We’re still going to do one-on-one time; part of the attraction of doing ICT at home is that they get my undivided attention.

I’m going to base this year’s syllabus on a single source: Adventures in Raspberry Pi by Carrie Anne Philbin. It’s colourful and the language is simple, so both DDs can read through it for themselves. However, the concepts quickly get complex, so they’ll need guidance and encouragement as we progress through the book.

Let’s see how we progress. I’ll keep you posted.

Saturday 1 March 2014

Futuregazing, part 4: Better matching humans with tech.

Tech should become like us, not the other way round. That way, it’ll actually be what we want. blog.mindrocketnow.com

February is a bit late to make predictions for the year, even if it still feels like the year has only just started. (Actually, it’s March now – I should have posted this yesterday!) So instead, this year I thought I’d write about advances I’d like to see in the industry. They’ll probably arrive later than 2014, but I think they’re needed sooner rather than later. In this last entry I look at how we interact with TV technology.

Better human-technology interaction.
There’s a common characteristic of much technology-led innovation: the way you use it requires you to read the manual. Most things aren’t as intuitive as the iPhone, and those that are, are truly disruptive. Improving the way we interact with the technology of watching TV will be disruptive in this industry.

I’ve written before about how the way we interact with the TV, using the remote control and the electronic programme guide (EPG), is hindering innovation. There are interesting solutions already: companion-app EPGs, Hillcrest Labs’ remote control, using Kinect to create a “touch screen” experience for TVs, and BBC Playlister. Let’s see what matures in 2014.

I was thinking about how else we interact with the TV, and it occurred to me that the most obvious interaction is the act of watching itself. Even though there continues to be great innovation in the way the bits and bytes are put onto the screen, the way the image gets from the screen to our eyes hasn’t changed in a hundred years. We still rely on how we perceive all those itty-bitty red, green and blue pixels (let’s conveniently forget Sharp’s attempt to give us a fourth pixel). Better tuning the pixel wavelengths to our rods, cones and photosensitive ganglion cells will mean less cognitive friction in translating displayed colours to actual colours, and so more pleasing images.

The same principle can be applied to the CMOS sensor in the camera that captures the images, and also to the encoding process itself. And for a double bonus, better matching means fewer “wasted” bits to describe the image, and so better compression.

I’ve also written before on the limitations inherent in how we currently choose what to watch. Or, to put it another way, how we interact with the metadata. The separation of content discovery from content consumption better matches how we receive recommendations. At the moment, recommendations are far too contextualised: the EPG shows only the programmes carried by your service provider, and the “people like you have watched” recommendation limits selections to that provider’s on-demand catalogue.

But real-world recommendations, the ones made by actually talking to someone, pay no heed to which service you use, or how it has negotiated licensing rights. When someone tells you that Sherlock was brilliant, if overly obtuse, it doesn’t matter whether you end up watching the episode on iTunes or on iPlayer. So to better match how humans make recommendations, we need to separate the recommendation from the viewing.

This approach has a double benefit too. There’s much less emphasis on usage patterns around the programme (categorising behaviour through collaborative filtering over a data set restricted to one content catalogue), and much more on the nature of the programme itself. As a result, the recommendations will be better.
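To make the contrast concrete, here’s a toy sketch in Python (not taken from any real system – the programme titles, tags, catalogue and viewing histories are all invented, and Jaccard similarity is just one simple choice of similarity measure): a content-based recommender that scores programmes purely on their own metadata, next to a collaborative filter that can only ever suggest something from within one provider’s catalogue.

# A toy sketch contrasting the two approaches. All data below is made up.

from collections import Counter

# Content-based: score programmes purely on their own metadata (genre tags),
# regardless of which provider's catalogue they live in.
programmes = {
    "Sherlock":     {"crime", "drama", "mystery"},
    "Luther":       {"crime", "drama"},
    "Planet Earth": {"nature", "documentary"},
    "Broadchurch":  {"crime", "drama", "mystery"},
}

def content_based(liked: str, candidates: dict) -> list:
    """Rank candidates by Jaccard similarity of their tags to the liked programme."""
    liked_tags = candidates[liked]
    scores = {}
    for title, tags in candidates.items():
        if title == liked:
            continue
        scores[title] = len(liked_tags & tags) / len(liked_tags | tags)
    return sorted(scores, key=scores.get, reverse=True)

# Catalogue-restricted collaborative filtering: recommendations can only come
# from programmes that other subscribers of the *same* service have watched,
# and only if the service actually carries them.
catalogue = {"Luther", "Planet Earth"}       # what this provider carries
other_viewers = [
    {"Luther", "Planet Earth"},
    {"Luther"},
]

def collaborative(watched: set, histories: list, catalogue: set) -> list:
    """Recommend the most co-watched titles, but only those in the catalogue."""
    counts = Counter()
    for history in histories:
        if watched & history:                # a "person like you"
            counts.update((history - watched) & catalogue)
    return [title for title, _ in counts.most_common()]

if __name__ == "__main__":
    print(content_based("Sherlock", programmes))                # ranked by the programme itself
    print(collaborative({"Luther"}, other_viewers, catalogue))  # capped by the catalogue

The point of the sketch: the first recommender can happily surface Broadchurch even if your provider doesn’t carry it, because it only looks at what the programme is; the second can never recommend anything outside the catalogue, however well it matches your taste.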

Finally, let’s turn our attention to the actual image. I’m no director, and I don’t purport to know how to frame the perfect shot. But the introduction of Ultra HD gives directors who do know these things an interesting new tool. The much higher resolution means you can sit much closer to a much bigger screen, which greatly increases the field of view, so a more natural image including wraparound peripheral vision becomes possible. And a more natural image is more immersive. I’ve seen this effect personally: never has footage of people crossing the road in Japan been as engrossing as at NHK’s demonstrations of Ultra HD at TV conferences.


TV has managed to be wildly successful despite being quite user-unfriendly. If we improve the human interface, there’s less to get in the way of what makes it successful: the communication.


More in this series: part 3.