Deep thoughts from the Centre for Applied Computer Science @ The University of Bolton


Boosting I/O for HDD

Seagate's multi-actuator drive


There's an interesting article in The Register about Seagate developing hard disks with two independent sets of read/write heads. When you think about it, this is long overdue. Currently an eight-platter drive has sixteen read/write heads, but they all move in concert: reading from a particular location means that every head is positioned over the same track, waiting for the desired sector to spin past.
Seagate's new implementation splits the head mechanism into two independently moving groups. If the disk can read or write two locations simultaneously, that is already a substantial boost to I/O capacity. The Register speculates that other manufacturers will jump on this trend and go further, with even more independent actuators.
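To make the benefit concrete, here's a minimal Python sketch of the kind of workload a dual-actuator drive can overlap: two outstanding reads at widely separated offsets. The file path, offsets and chunk size are my own placeholders, not anything from Seagate. On a conventional drive the two requests queue behind a single actuator; a split head assembly can service them in parallel.

    import os
    from concurrent.futures import ThreadPoolExecutor

    PATH = "/tmp/bigfile.bin"   # hypothetical large test file
    CHUNK = 4096                # read 4 KiB per request

    def read_at(offset):
        fd = os.open(PATH, os.O_RDONLY)
        try:
            # pread is positional, so the two threads don't fight
            # over a shared file offset
            return os.pread(fd, CHUNK, offset)
        finally:
            os.close(fd)

    # Two concurrent requests at opposite ends of the disk: the kind
    # of access pattern where a second actuator roughly doubles IOPS
    with ThreadPoolExecutor(max_workers=2) as pool:
        low, high = pool.map(read_at, [0, 8 * 1024**3])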
Seagate's own blog post explains this further. As ever, innovation puts our lecture material out of date: in week 11 of Introduction to Operating Systems Virtualisation our students have just learned that the read/write heads move in sync. That's another lecture to rewrite next year. Computing always evolves, and we have to evolve with it.

Drinking the Kool-Aid

Tub of grape-flavour Kool-Aid

Our American friends have a lovely expression: "Drinking the Kool-Aid". If you're not American (and most of us aren't) it isn't always easy to understand. Kool-Aid is a relatively cheap powdered soft drink. The phrase refers to the 1978 Jonestown deaths of followers of the Peoples Temple, a murder/suicide in which the drink was mixed with poison.
Drinking the Kool-Aid has come to mean the act of faith of a true believer, typically going along with a dangerous or doomed course of action. That's all a bit grim, so what has it got to do with Computing at Bolton?
Our new Computer Networks and Security course has a greater emphasis on the Linux operating system than the earlier course of the same name. Along with the Cloud Computing courses, we have linked up with Red Hat, the company behind one of the leading Linux distributions. We will be the first UK university to integrate the Red Hat Academy curriculum into our degree programmes. Students will be able to gain Red Hat certifications as part of their studies (as well as the Cisco Academy certifications we already offer).
Over the summer we are transitioning one of our labs to be Linux-only. In an effort to be a true believer, I'm going to develop my course material for the CLD4002 module using Linux.
Which Linux, though? As you know, there are squillions of different Linux distributions out there. One of the strengths of the open source movement is the idea that anyone can take the source code of an existing project and use it as the starting point for a new project (known as a fork). There are loads of articles suggesting which Linux to choose, but there are two main families: Debian and Red Hat. Popular distros like Ubuntu are derived from Debian; Fedora and CentOS are based on Red Hat. As we are following the Red Hat Academy, I've gone for CentOS at home. (Interestingly, only the Debian site has the GNU-Terry-Pratchett header present.)
The install was pretty easy, but then I'm not a beginner; if you have never done it before you might find it daunting. The main choice I had to make was the type of initial system I wanted. Normally I would opt for a server system to run web and/or database services; this time I needed a desktop system instead, to develop teaching material. On Bren's advice I went for the GNOME Desktop option. I left the computer chuntering away to itself and went to watch TV. (My broadband is slow at home and this was a network install.)
Once the install was done it was time to do the usual routine:

  • Install a browser
  • Install an office suite (LibreOffice was already there: a full package, not a 10-day evaluation)
  • Add Google Drive
  • Add my printer

Although Firefox was already installed, I prefer Chrome, so after finding some instructions I was able to install it fairly quickly. A totally free office package, LibreOffice, was already there; I'll write more about the office tools in a future post.
Adding Google Drive was a bit more effort. I found a clear enough set of instructions, but it was a fairly complex setup which might be a problem for a newbie. I came unstuck with my printer, though. The printer is an old HP LaserJet which I picked up for £30 on eBay a few years ago. It is rock-solid reliable, and I would much rather shell out £25 for a toner cartridge once a year than face the monthly grind of inkjet cartridges. The printer hangs off my Mac, shared using CUPS, so my Linux system should be able to use it. I found the shared queue's address and added the printer to my Linux host. So far, so good, until I tried to print: nothing happened, and the error log showed a message about a filter problem.
Right: time to Google the error message and solve the problem. I held off, though, because I know what kind of rabbit hole this sort of problem can turn into. It could be a Mac problem, a CUPS problem, a Linux problem or an issue with the printer driver. The chances are it would take a couple of frustrating hours to fix, and as I didn't need to print anything right away I left it for another day.
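When I do get back to it, a handy way to poke at CUPS without leaving Python is the third-party pycups binding (pip install pycups). A minimal sketch, assuming a queue called "HP_LaserJet"; the queue name and test-page path are hypothetical, not taken from my actual setup:

    import cups  # third-party binding around libcups

    conn = cups.Connection()

    # List every queue CUPS knows about, with its current status
    for name, info in conn.getPrinters().items():
        print(name, "-", info.get("printer-state-message") or "ok")

    # Send a test page to the (hypothetical) queue for my LaserJet
    conn.printFile("HP_LaserJet", "/usr/share/cups/data/testprint",
                   "Test page", {})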
After an hour’s fiddling about, I was able to get down to work and write something. Now what on earth do I want to write?

Voice… The Final Frontier?

Talking to the computer
Do you feel awkward talking to the computer? How about in front of other people? Voice control is really still in its infancy, but speech recognition is getting better: by shipping your audio to cloud-based services, the big players can parse speech and recognise what is being said very well. Certainly your mobile can understand phrases like "navigate to work" or "call Aunt Jemima". Well, it can if you use Google Assistant or Siri; at the time of writing, Samsung's Bixby assistant still can't understand English.
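If you want to see how much of that cleverness lives in the cloud rather than on the device, the third-party SpeechRecognition package for Python makes the round trip visible. A minimal sketch, assuming a recording called command.wav (the file name is my own placeholder):

    import speech_recognition as sr  # pip install SpeechRecognition

    r = sr.Recognizer()

    # Load a short recording of a spoken phrase
    with sr.AudioFile("command.wav") as source:
        audio = r.record(source)

    # The audio is shipped off to Google's web speech API; the actual
    # recognition happens server-side, not on this machine
    print(r.recognize_google(audio))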

Skills and vocabulary

Far Side cartoon, "What we say to dogs" © Gary Larson


Sadly, your digital assistant does not really understand you! It has a limited instruction set which you learn how to use. Your device has a vocabulary of keywords such as "call" and "install"; go beyond that limited range and there's no understanding. Sure, the designers are clever, with cute Easter eggs built into their systems: try asking Siri "what's your favourite colour?" or "what are you wearing?" and you get a witty canned response. The key word is canned: the system does not understand speech.
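To see just how shallow that "understanding" is, here's a toy assistant in Python: keyword spotting plus canned replies, which is roughly the level of comprehension described above. The vocabulary and the replies are my own inventions, not anyone's real assistant.

    # Canned Easter-egg responses, matched verbatim
    CANNED = {
        "favourite colour": "A lovely shade of octarine.",
        "what are you wearing": "Aluminium, glass and a hint of silicon.",
    }

    # The tiny keyword vocabulary the 'assistant' actually acts on
    COMMANDS = {"call", "navigate", "install"}

    def respond(utterance: str) -> str:
        text = utterance.lower().strip("?!. ")
        for phrase, reply in CANNED.items():
            if phrase in text:
                return reply                      # canned, not understood
        for verb in COMMANDS:
            if text.startswith(verb):
                argument = text[len(verb):].strip()
                return f"OK: {verb} '{argument}'"
        return "Sorry, I don't understand."       # everything else falls through

    print(respond("What's your favourite colour?"))
    print(respond("Call Aunt Jemima"))
    print(respond("Ponder the meaning of life"))  # beyond the vocabulary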
Fixing this shortcoming is the next step the companies need to take, and the key is to recruit external developers. There are parallels with the iPhone: when it launched in 2007 you could only run the applications Apple shipped; the App Store was announced the following year and now hosts millions of apps.
Amazon are following a similar path with Alexa: third-party developers can create their own 'skills' for it. As this Wired article reports, Amazon recently reached the milestone of 10,000 skills, up from just 135 in 2015.
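For a flavour of what a skill looks like under the hood, here is a minimal sketch of a custom-skill handler written as an AWS Lambda function in Python. The intent name and the reply text are my own inventions; the JSON envelope follows Amazon's documented Alexa Skills Kit request/response format.

    def speak(text):
        """Wrap plain text in the Alexa Skills Kit response envelope."""
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": text},
                "shouldEndSession": True,
            },
        }

    def lambda_handler(event, context):
        request = event["request"]
        if request["type"] == "LaunchRequest":
            return speak("Welcome to the Bolton Computing skill.")
        if request["type"] == "IntentRequest":
            # "HelloBoltonIntent" is a hypothetical intent defined in the
            # skill's interaction model, not a built-in Amazon intent
            if request["intent"]["name"] == "HelloBoltonIntent":
                return speak("Hello from Bolton!")
        return speak("Sorry, I can't do that yet.")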
Skills are the element that can propel voice control from today's limited state to something truly useful. Accurate speech recognition plus the utility of skills could give us a real breakthrough technology.

The Wired article quotes Amazon: "We had this inspiration of the Star Trek computer," says Steve Rabuchin, who heads up Alexa voice services and skills at Amazon. "What would it be like if we could create a voice assistant out of the cloud that you could just talk to naturally, that could control things around you, that could do things for you, that could get you information?"

Maybe Star Trek is the future after all.
