2011-11-30

Kevin Riley

IMS GLC: Kevin Riley:

Kevin was a great coach. I found him a great help in getting ideas about technical standards in learning out of the academic community and into something relevant to the businesses that are creating tools and supporting the whole education community.


He was a colourful character, no doubt. It is no surprise that this memorial page contains warm memories of hats, braces and interesting dinner time conversation.


RIP

2011-10-14

Hashtags now work in Google+

A few weeks ago I posted about the tools you can use for linking Google+, Twitter and Facebook so that your public post propagates across all three services.  See Google+, Twitter and Facebook: all in one post

For me, the most useful thing about Twitter is the ability to follow hashtags.  I love to keep up with the back-channel during talks at conferences and events I go to.  I also find that hashtags add a new dimension to live television: tags like #xfactor and #bgt generate an incredible volume of traffic, providing ample distraction to cover the long advert breaks.  Although I've kicked the trashy telly habit this year I haven't had to give up following along on Twitter.  The viewing public is great at self-organizing around hashtags, and the UK's Channel 4 has even published official hashtags to use during political debates, to quite good effect I think.

Anyway, if there is one barrier to my full adoption of Google+ it is that hashtags are not a feature of that platform.  Well, until yesterday that is.

I spent the first half of this week at my employer's European Users Conference, where we published the hashtag #qmcon to help people follow along on Twitter.  I used my mobile to check in at the hotel (in the Google+ sense) and used the hashtag in my comment.

Now when I review my Google+ stream the #qmcon hashtag is clickable; clicking it generates the above screen shot. You can also put the hashtag straight into the Google+ search box at the top of the page.

Unfortunately there are business issues between Google and Twitter which mean that a search in Google+ does not show results from Twitter too.  However, there is no reason why people with accounts on multiple services shouldn't cross-post using automated services, so I'd expect the content to converge and future competition to be around the experience of consuming these feeds.

2011-09-28

Safari woes: which bit of "Reset" didn't you understand?

As someone who works in software development, from time to time I have to use a clean browser to explore a technical problem. It's a pain to use my everyday browser for this because it means wiping all the identities on all my favourite websites so, like most developers, I keep one of my browsers for just this type of purpose.

In my case, the browser I choose to test with is Safari, partly because it has a neat "Reset Safari..." menu item that makes it easy to go back to a clean browser state at the start of the test.

Today, my world is upside down. The confidence I had in my ability to control access to my private information is shaken.

It started with iGoogle. Safari opened on my iGoogle home page as usual, I selected "Reset Safari..." with all the settings and was left staring at a plain iGoogle window inviting me to sign in. So far so good.

My next action was to open a new tab: my iGoogle page appeared again. Interestingly, the Gmail, bookmarks, Google Reader and other gadgets were all logged out, but that new black strip at the top of the screen clearly said "Steve Lay", and clicking on Google+ revealed my latest posting, including my private automatic check-ins. Somehow my identity had come back from the dead.

Perhaps this is a memory bug in Safari, I thought. I saw some advice on the internet suggesting I should quit Safari just after resetting. This time my name was missing from the black strip on iGoogle, but clicking "+You" took me back to my Google+ home page again. My identity was back.

I had strange visions of the old Roger Moore movie, The Man Who Haunted Himself. Perhaps this is my other identity; maybe this Steve Lay has checked into different places, posted different things?

The answer seems to be more mundane, though I'm a bit hazy on the technical details. It seems that the other identity that keeps breaking through into Safari is probably the Steve Lay that is logged in to Google with Chrome. You see, both Safari and Chrome use the WebKit framework to handle basic web protocols. In turn, Apple's implementation directs all WebKit-based HTTP traffic through a low-level part of the system that handles cookies for you. The upshot, if I'm understanding the documentation correctly, is that cookies are shared between all WebKit applications.

That means that "Reset Safari..." probably doesn't do what you want. But when you think about it, it isn't just the reset function that is acting strangely here. I'm used to the idea that a browser can't get a cookie until I've been to the originating website, and that it can't identify me until I've logged in to the originating website. But none of this is true: Safari can find cookies and identities provided you have logged in with any WebKit browser, which is quite different.

It is common these days for social networking sites to place badges on almost every web page, enabling the network's owners to marry up your identity with that of the page you visited and hence get a better picture of your browsing habits. One way to reduce the amount of personal information you leak this way is to ensure you've logged out of services like Google and Facebook before you go reading up on that embarrassing medical condition or searching online for a divorce lawyer. When (or if) identities routinely leak between browsers it is going to be much harder to prevent this type of information getting into the wrong hands.

2011-09-14

Standards: now even more to choose from?

Two years ago, in my first post to Questionmark's blog (please note that I work for Questionmark) I wrote about licensing for open standards, speculating that the standards community might benefit by modelling itself on the open source community where standard licenses have emerged to simplify the legal landscape for developers.

Selecting a license isn't the only legal problem that faces an open source development community. How do I know that the code contributed to an open source project is really open? This is the software development world's version of dealing in stolen goods. If the true copyright holder doesn't consent to having their code contributed to the project then there will be trouble ahead.

This well-known problem in the world of open source software has parallels in the world of open standards too. In fact, the use of the term 'submarine patent' implies a more deliberate process of hiding IPR for the purpose of suing people later, and submarine patents are certainly feared by people developing and implementing standards. [1]

If I want to start an open source project there is a wide range of hosting sites that I can use to help with the basic tools: source code repository, discussion group, wiki, download/distribution service, etc. In most cases, these tools provide a very fast way to get going with very little oversight. Contrast these basic services with the Apache Software Foundation. It is a bit slower to get going but in addition to the collaboration tools it also provides a basic legal framework within which the project will work.

Now imagine what this might look like for standards organizations and you'll have something very similar to Community Groups as launched last month by the W3C: W3C Launches Agile Track to Speed Web Innovation

It is early days for this type of service but a few things are clear from the press release. Firstly, if your community is international (and perhaps even if it isn't) this process may provide a better standards track than working with a national body (e.g., ANSI in the US or BSI in the UK). W3C claim that they can provide a clear path towards full standardization by ISO/IEC.

Secondly, W3C appears open to providing this part of their process as a service to specialist industry players. I've worked with a number of consortia in the Learning, Education and Training space and I've seen the amount of time that has been spent creating IPR frameworks. Being able to outsource this part of a consortium's work to W3C would undoubtedly have saved time (and in some cases significantly increased credibility). I'd encourage any specialist body to look seriously at this as an opportunity.

This type of specialization of function lowers the barriers to entry for new projects too. It would be naive to think that this initiative will not make it easier to create new specialist consortia to rival existing players. As a result, consumers of technical standards will probably have even more to choose from in future.


2011-09-09

Google+, Twitter and Facebook: all in one post

I'm not a prolific tweeter, my Google+ stream is more of a trickle (no age jokes please!) and I seem to have hit the wall on updating my Facebook status.

But each to their own: different people like consuming network updates through different tools.  So when I do feel like sharing something publicly it seems to make sense to update all three at the same time.

And now I can.

I've been using the Twitter application for Facebook for a while now, but until the recent round of privacy updates in Facebook, Twitter posts were treated the same as wall posts from other friends, even though I had specifically linked my Twitter account.  Now there's a new privacy option for integrations like this:



I've started a bit cautiously so if you really want to see my tweets in Facebook you'll have to be my friend first.  However, I have worked hard at setting up friend lists and associated permissions to simulate something a bit more like the circles that are so easy to set up on Google+ so you won't have to read about what my family and (other) friends are having for breakfast.

Which brings me to the second piece in the puzzle.  How to get public posts from my Google+ stream into twitter (and onwards into Facebook as above).

For this I'm using ManageFlitter.  This tool promises to take my public posts from Google+ (and only my public posts!) and repost them to Twitter.  They say it might be a bit slow, but I don't think that is going to be a problem for me and, anyway, when I tested it it seemed to update within the margin of error of a typical Twitter refresh.

The icing on the cake is a Chrome extension (also available for Firefox I believe) which allows you to view your twitter streams right inside the Google+ interface.


The extension is called Google+Tweet.  I discovered it through the power of +1: this was the first time I'd done a Google search and found one of the results tagged with the face of someone I recognised (which I think means someone who is also in one of my Google+ circles).  A nice reminder that the way we use the web is still changing rapidly.

2011-09-05

HP Photosmart: nice screen, shame about the firmware

In an earlier post to this blog I talked about my use of dd-wrt to help me get my home wifi signal to span my house: see Open Source Routers to the Rescue.  Well, I have finally got fed up with some of the flaky behaviour of my Virgin router.  Periodically it crashes and I have to cycle the power on both my new HP Photosmart B110 and my old HP Officejet 6310 to re-enable printing.
 
To recap, dd-wrt is open source router firmware with a host of well-documented options to allow you to control your router hardware.  In the end I found wireless repeating (extending the range of my Virgin hub) too unreliable, so I used dd-wrt to configure my second router as a "Client Bridge" instead.  This allowed me to connect devices without wireless support by plugging them into the LAN ports on the back of the second router.  Client bridge mode is not compatible with the use of DHCP to automatically assign IP addresses; according to the dd-wrt site this is actually a restriction of the Wifi protocol itself, so I configured my home network with manually assigned IP addresses for most devices.

This was easy to do on the old Officejet printer, which has a low-res calculator-style LCD screen and a fax-style numeric keypad.  But on my new HP Photosmart B110 printer, which comes with a flashy colour LCD panel and an icon-based interface (no touch screen though), the network settings can only be changed through the built-in web interface, so the printer must already be connected to a network.  This seems a bit dumb, but I guess most wifi networks will support DHCP so it is less of a problem in practice.

I now own a third router because I have been planning to take advantage of something called WDS, which is the proper way of joining two wireless networks.  With WDS you can extend wifi coverage and bridge the physical LAN ports in a way that fully supports DHCP.  (It is a pity that my Virginmedia router does not support WDS on its own but, given that WDS is not a proper standard yet, mixing and matching suppliers is probably unwise anyway.)

I had been putting off the chore of reconfiguring the network but what better way to spend a wet Sunday afternoon?  I now have a normal DHCP-based home network again.  The two connected routers mean I have coverage throughout the whole house and some of the garden.  At last, I've been able to turn off the wifi beacon on the Virgin router and relegated it to little more than the cable modem it replaced.

Reconnecting all the devices to the new network was simple until I got to the Photosmart printer.  The network transformation involved a change of subnet, so the static IP address was now useless.  I went back through the wireless wizard to join the printer to the new network but the old IP settings persisted.  I then hit the 'reset to factory defaults' option in the menus, restarted the printer and ran through the wizard again: the printer still remembered the old static IP.  I even plugged the printer into my Mac via a USB cable and ran the printer utility, but I could find no way to affect the network settings.  In the end, I went back to my box of old routers and dug out an old Netgear wireless access point, which I was finally able to use to contact the printer and change its IP settings back to automatic.

An option to reset to factory defaults should surely do what it says for all settings?  Now all I need is something like dd-wrt for my Photosmart printer...

2011-08-23

Do you <object> to <img>?

An interesting question in QTI history came up in discussion amongst the QTI IPS group recently and I thought I'd answer via my blog rather than email.

In QTI, you may use either the <img> element or the <object> element to include images in questions.  But in some cases only <object> will do.  In the words of the question put to me:

"The rule appears to be that declaration as img is used where the XHTML is to be passed on to (say) a browser for display, but declaration as an object is used where more complex handling is  required."

Why?


Both <object> and <img> are valid ways of including an image in XHTML - during spec development Safari would render them if you just pointed it at the QTI without any processing!

However, at the time the spec was written <object> was considered the better way to include images, with <img> being the legacy method common in HTML files.  As with many efforts to regularize practice, this hasn't really caught on as much as the HTML working group hoped!

Anyway, as a result of the above, where we wanted to accept an image and nothing else (i.e., in certain graphic interactions where issues like bounding rectangles, click-detection and the like needed to be simple and well defined) we chose the <object> form and not the <img> form.

This doesn't stop you putting images in runs of text in places like simpleChoice where layout is not critical.  And the width and height can be put on both <img> and <object> to hint at desired rendering size.  In both cases the attributes are optional as most rendering engines can use the default dimensions drawn from the media file itself.
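
To make the two forms concrete, here is a minimal sketch of how an image might appear inside a simpleChoice in QTI version 2 (the file names, identifiers and sizes are illustrative only):

<simpleChoice identifier="choiceA">
  <!-- legacy XHTML form: no media type, but alt text is available -->
  <img src="images/triangle.png" alt="A triangle" width="64" height="64"/>
</simpleChoice>
<simpleChoice identifier="choiceB">
  <!-- preferred form: media type declared; element content is the fallback -->
  <object data="images/square.png" type="image/png" width="64" height="64">A square</object>
</simpleChoice>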

But note that the <img> element has no way to specify the media type, which places a burden on the rendering engine: if it needs to sniff the image size it will have to use file extension heuristics, magic bytes or similar to determine the type of file before it is able to use it.  These are the little things that cause bugs; <object> requires the media type, so it wins there too.

At first glance, <object> seems to lack the accessibility features of <img> as it has no alt text.  But it does provide a fairly rich way of handling platform limitations and accessibility needs, the rule being that the contents of the element should be used as a fallback if the object itself is not suitable.  There is an issue here with its use in QTI.  We chose <object> because we wanted to force people to use an image, not a text flow: in graphic questions drag-and-drop style renderings are common, and this is much harder with random chunks of HTML.  But if the image is in a format that the browser does not support, will a rendering agent use the fallback rules, and what happens if it ends up with a text flow anyway?  QTI is silent on this, but I would not rely on any fallback content when <object> is used in graphical interactions for one of its special purposes.  If a delivery engine can't show the objects in a graphical question it should seek an alternative question, not an alternative rendering, in my opinion.

By the way, in the case of <gapImg> we explicitly added a label attribute alongside the <object> to make it clear that, if a label is given by the author, it should be shown to all candidates and not treated as a more accessible alternative.  Which brings me on to the issue of alt text in assessment generally...

When I first left university I seriously considered working as a graphic artist, so speaking from this limited experience I feel entitled to say that however much you think of your drawing skills others may not recognize your 'dog', 'cat' or even your 'boa constrictor digesting an elephant' (Google it if you are curious).  If in doubt, a label is a good idea for everyone.

2011-07-17

Using gencodec to make a custom character mapping

One of the problems I face in the QTI migration tool is markup that looks like this:

<mattext>The circumference of a circle diameter 1 is given by the mathematical constant: </mattext>
<mattext charset="greek">p</mattext>

In XML the charset used in a document is detected according to various rules, starting from information available before the XML stream is parsed and culminating in the encoding declaration in the XML declaration at the top of the file:

<?xml version="1.0" encoding="UTF-8"?>

For this reason, the use of the charset parameter in QTI version 1 is of limited value; at best it might provide a hint about an appropriate font to use when rendering the element.  This is not a huge problem these days, but when QTI v1 was written it was common for document renderings to be peppered with large squares indicating that the selected font had no glyph for the required character.  These days renderers are smarter about selecting default fonts, enabling developers to display arbitrary Unicode text.

So you would think that charset is redundant, but there is one situation where we do need to take note: the Symbol font. The problem is explained well in this article: Symbol font – Unicode alternatives for Greek and special characters in HTML.  The use of 'greek' in the QTI v1 examples is clearly intended to indicate use of the Symbol font in a similar way, not the use of the Greek code page in ISO-8859. The Symbol font is used a lot in older mathematical questions; you can play around with the codec on this neat little web page: Symbol font to Unicode converter.

According to the above article, the lower-case letter 'p', when rendered in the Symbol font, actually appears to the user as π - the Unicode character known as GREEK SMALL LETTER PI.

The problem for my Python script is that I need to map these characters to the target unicode forms before writing them out to the QTI version 2 file.   This is where the neat gencodec.py script comes in.  I don't know where this is documented other than in the gencodec source file itself.  But this is a very useful utility!

The synopsis of the tool is:

This script parses Unicode mapping files as available from the Unicode
site (ftp://ftp.unicode.org/Public/MAPPINGS/) and creates Python codec
modules from them.

So I downloaded the following mapping to a directory called 'codecs' on my laptop:

ftp://ftp.unicode.org/Public/MAPPINGS/VENDORS/APPLE/SYMBOL.TXT

Then I ran the gencodec script:

$ python gencodec.py codecs pyslet
converting SYMBOL.TXT to pysletsymbol.py and pysletsymbol.mapping

And confirmed that the mapping was working using the interpreter:

$ python
Python 2.7.1 (r271:86882M, Nov 30 2010, 09:39:13) 
[GCC 4.0.1 (Apple Inc. build 5494)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> unicode('p','symbol')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
LookupError: unknown encoding: symbol
>>> import pysletsymbol
>>> reg=pysletsymbol.getregentry()
>>> import codecs
>>> def SymbolSearch(name):
...   if name=='symbol': return reg;
...   else: return None
... 
>>> codecs.register(SymbolSearch)
>>> unicode('p','symbol')
u'\u03c0'
>>> print unicode('p','symbol')
π

In previous versions of the migration tool I didn't include symbol font mapping because I thought it would be too laborious to create the mapping.  I was wrong, future versions will do this mapping automatically.

2011-07-13

In Memoriam Claude Ostyn: Account Suspended

It is strangely saddening to have to mourn a professional colleague twice, but today I noticed for the first time that one of my 'go to' sites for information on SCORM has finally been laid to rest too (this is perhaps old news).


Fortunately much of the information is preserved by archive.org, but the strength of his voice seems strangely diminished there.

2011-07-11

Open Source Routers to the Rescue!

I spent a few happy hours this weekend fiddling with routers and stumbled across a great open source project called dd-wrt.  This one is not for the timid as there is a serious risk of bricking your router, but if you are anything like me you have an old box gradually filling up with discarded routers that you are keeping just in case your main router fails.

I recently switched tariffs with my broadband supplier, Virgin Media.  As a result, they replaced my cable modem with an all-in-one wireless-router/cable modem.  I can see that this simplifies their business for most customers, but when you already had a good router before it is a pain to realise that you are now tied to something with poorer range and an annoying tendency to make your printer panic.

Anyway, as a result I went and bought an ASUS device (RT-N12) capable of being used as a wireless repeater to improve coverage in those hard-to-reach corners of my small 3-bed semi.  Unfortunately, try as I might I could not get it to work as advertised.  At one point I actually got it to repeat the wifi signal from the Virgin router but the LAN ports were all on their own private network.  My attempts to fix the LAN ports broke wifi connectivity for everyone in the house and almost bricked the device.  I updated to the latest ASUS firmware but still no joy - I was beginning to get bored of the recovery mode procedure for restoring factory defaults.  There has to be a better way, surely?

Part of the problem is that the ASUS device has simplified the user interface of the router to an extreme, making it very hard to fix issues.  The quality of the translation in the UI is very poor, and the little help icon almost always brings up a box with placeholder text explaining what the help function does rather than actually providing any useful insight into the settings.  The one saving grace is that, with so few settings to control, one can quickly exhaust all possible combinations and conclude that home networking is not for mere mortals.

But, masochists take note: there is even more fun you can have with these devices and you don't need to spend a penny more (the RT-N12 is available from £28 at 52 stores according to Google).

The software distributed by dd-wrt is a replacement firmware that actually works. Don't be scared off by the use of 'beta' or the frightening stories of bricked devices. I'm sure there is a reason why you are supposed to use the special ASUS firmware-updating tool on Windows, but I just flashed the firmware through Safari from my Mac and within a few minutes I was looking at a whole new user interface. This software is a breath of fresh air. It may be free, but there is more documentation than for the supplied firmware and, the project being German, the quality of the English is better than I could write myself.

The ASUS device clearly has software shortcomings, but when linking Wifi devices it isn't always clear which party is to blame. The market always seems to be getting ahead of the standards, and even with the faster 802.11n mode it seems that devices can't agree which frequency to talk on, so in practice most networks are operating at b/g speeds for legacy reasons.

Routers like the Virgin hub are understandably configured for compatibility, so most people seem to recommend turning off their wireless completely and then using one of the four LAN ports to plug in a completely separate wireless infrastructure, splitting new 'n' devices from the old b/g - now where is that box I keep all my old routers in...

2011-06-27

Amount of profanity in git commit messages per programming language

I spotted this blog page from a list I subscribe to the other day. Those sensitive to profanity should look away now; others can see the stats here...

Amount of profanity in git commit messages per programming language

Given that C# and Java are similar in many ways and are often used for the same things it is amusing that they both induce exactly equal levels of profanity in their developer communities.

The figures for different languages are significantly different (with C++ being the most sweary language to work in, it seems), so I feel like this data is trying to tell us something.

And who are the nicest people to program with? PHP developers it seems (with Python not far behind).

2011-06-24

Visual C++ Redistributable Licensing: I'm just not seeing it

As part of putting together the latest builds of the QTI Migration tool I have had to repackage the updated tool into a new installer.

The migration tool is written in Python and uses the py2exe tool to convert the Python script into a set of binaries that can be distributed to other Windows systems as a ready-to-run application, without requiring Python (and various other packages, including wxPython, used for the GUI) to be installed first.

The output of py2exe is a folder containing the executable and all its supporting files ready to package up.  Originally this was all done by Pierre, my co-chair of the QTI working group.  I'm happy to report that updating the installation scripts went fine and I've been able to create a new Windows Installer using InnoSetup.

There is a recipe for using py2exe with wxPython published on pythonlibrary.org called "A py2exe tutorial".  However, I did have one problem with this recipe: I too had trouble with MSVCP90.dll, and I needed the help of stackoverflow (thread: py2exe fails to generate an executable) to actually get the build going. Once done, I was concerned by the warning messages about needing a license to redistribute the DLL in my installer.  I found another blog post on distributing Python apps for the Windows platform which spelt out my options.  As I don't personally own a Visual Studio license, it seems I need to use the redistributable package which can be downloaded from Microsoft.
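
For what it's worth, one common shape for the workaround discussed in that thread is simply to exclude the DLL from the py2exe bundle.  A minimal sketch of a setup script along those lines (the entry-point file name is illustrative, not the migration tool's actual layout):

from distutils.core import setup
import py2exe

setup(
    # 'windows' builds a GUI executable (no console window)
    windows=[{'script': 'qtimigrate.py'}],
    options={
        'py2exe': {
            # leave MSVCP90.dll out of the bundle; users must already
            # have it, e.g. via the VC++ redistributable package
            'dll_excludes': ['MSVCP90.dll'],
        }
    },
)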

Unfortunately, when I download this file the license in the resulting installer does not appear compatible with packaging it into my installer for distribution with my tool.

Several people on the net seem to suggest that the DLL is off-limits, although you would hope that something called a 'redistributable' would do exactly what it says on the tin.  Indeed, if you don't run the package it isn't clear what license you signed up to by downloading it, but once you run the installer it clearly says that "You may make one backup copy of the software.  You may use it only to reinstall the software." and that you may not "publish the software for others to copy".  So I've played it safe and am crossing my fingers that my users will have already installed these wretched DLLs on their systems before they try the migration tool.

Previous versions of the migration tool installer were built by Pierre and he did have a Visual Studio license so could do the build and redistribute the software.

My experience and the time I wasted trying to find an answer to this question eventually turned up one discussion thread in which the complex issues that the team within Microsoft faces are exposed: see VC++ 2005 redistributable.  Although this thread is a little old now the replies from Nikola Dudar are helpful in providing deeper insight into the issue and the conflict that having a chargeable development platform creates.  On one hand Microsoft would like it to be easy for people to create software for their platform but they also have a paid-for development tool chain in Visual Studio.  The existence of Visual Studio Express edition (a free lightweight development environment) appears to be suitable only for personal hobbyists and not for anyone wanting to build software for redistribution.  There are lots of replies to the above article but if you search down for "release team" there is a reply that emphasises the difficulty of finding the balance between paid and express editions and a link to a blog post relating to the creation of the free to download redistributable packages.  I like these types of forum discussions as they show that even 'evil empires' like Microsoft are full of ordinary people just trying to do their jobs.

2011-06-17

Getting ready for HTML5: Accessibility in QTI, img and alt text

Last night I was playing around with David McKain and co's excellent MathAssessEngine site.

I tripped over an interesting error with some test data produced by the QTI migration tool.  I was converting a basic MCQ with four images used as the choices.  On loading my content package into MathAssessEngine I got four errors like this:

Error: Required attribute is not defined: alt

I went off in search of a bug in my XML generation code in the migration tool, but discovered that what MathAssessEngine is really complaining about is an empty string being used as the alt text for an image.  Actually, empty alt text is allowed by the specification (my QTI v2 files validate) and it is also allowed by HTML4, so I think this is more a bug in MathAssessEngine, but it did force me to go and check current thinking on this issue because it is so important for accessibility.

According to the current editor's draft of HTML5 the alt attribute "must be specified and its value must not be empty" so it looks like QTI-based tools will need to address this issue in the near future.

The problem with the QTI migration tool is that it only has old scrappy content to work with.  There isn't even the facility to put an alt attribute on QTI version 1.x's matimage which, incidentally, is another reason why the community should be moving to QTI version 2.

So is there any way to set the alt text automatically when migrating version 1 content?

One possibility is to use the label attribute on matimage as the alt text for the img element when converting to version 2.  The label attribute in QTI version 1 is described as a 'content label' used for editing and searching.  This might be quite close to the purpose of alt for matimage, because a text string used to find an image in a search might be a sensible string to use when the image cannot be displayed.  However, editing sounds like something only the author would do, so there is a risk that the label would be inappropriate for the candidate.  There is always the risk of spoiling the question: for example, if the label on an image contained the word "correct" then candidates that experienced the alt text instead of the image would be at a significant advantage!
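
As an illustration, the mapping I have in mind would turn something like this QTI version 1 fragment (file name and label invented for the example):

<matimage imagtype="image/png" uri="images/shape1.png" label="red triangle"/>

into this QTI version 2 form, reusing the label as the alt text:

<img src="images/shape1.png" alt="red triangle"/>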

Another common way to auto-generate alt text is to use the file name of the image.  This is less likely to leak information, as authors are more likely to figure that the file name might be visible to the candidate anyway.  Unfortunately, image file names are typically meaningless, so it would be text for the sake of it and might even confuse the candidate - especially if the names contained letters or numbers that could get confused with the controls.  Just imagine a speech interface reading a shuffled MCQ: "option A, image B; option B, image C; option C, image A" - now our poor alt text user is at a serious disadvantage.

Finally, adding a general word like 'image' is perhaps the safest thing and something I might experiment with in the future for the QTI Migration tool but clearly the word 'image' needs to be localized to the language used in the context of the image tag, otherwise it might also be distracting.  I don't have a suitable look-up table to hand.

So, in conclusion, content converted from version 1 is always likely to need review for accessibility.  Also, my experience with the migration tool reaffirms my belief that developers of QTI 2 authoring tools should start enforcing the non-empty constraint on alt now, for compatibility, to get ready for HTML5.

2011-06-16

QTI on the Curriculum

01NPYPD - Linguaggi e Ambienti Multimediali (Multimedia Languages and Environments)

It is amazing what you stumble upon when you use Google alerts. I was intrigued to see these course materials which include a 62-slide introduction to QTI version 2 alongside such esteemed subjects as HTML5, SVG and CSS.

The slides are in English by the way!


2011-06-07

Reports of Mono's Death are Greatly Exaggerated...

This post was provoked by fears that following the acquisition of Novell by Attachmate the Mono project faces an uncertain future.  I've documented my thoughts on the Java/C# schism and what it might mean for my attempts to get my own Python modules working in these environments.

Mono is an open source project that implements C# and the supporting .Net framework allowing this language to be used on platforms other than Microsoft's Windows operating systems.

The schism between C# and Java is (was?) very harmful in my opinion and represents a huge failure of the technology industry back in the 1990s when the key commercial players were unable or unwilling to reach an agreement over Java and Microsoft redirected their efforts to developing C#.  (Just imagine where we would be if the C language had suffered the same fate!)

Since then, Java programmers have smugly assumed that their code would always work on a wider variety of platforms and represented the more "open" choice.  I always felt that Mono did just enough to enable the C# community to retain its credibility even though it would be hard to argue that it was a more open choice, especially given the absence of any standard in versions 3 and 4 of the language.  However, Oracle's acquisition of Sun has created a sense of uncertainty in the Java community too.

In both cases it seems natural to use the word 'community' because programming languages do tend to foster a community of users who interact and share knowledge.  In the case of open source communities they also share code by contributing to the frameworks that support the language's users.  This latter point is critical to me, the Java community goes way beyond the core framework.  Java without the work of the Apache foundation would be significantly less useful for programming web applications.

That said, there is a new community of Java developers emerging because of its use on the Android mobile platform.  This programming community may share the same syntax but could easily become quite distinct.  In some ways it is a return to Java's roots.  Java was invented as a language for embedded devices where the types of programming errors C/C++ developers were making could be fatal.  The sandbox was a key part of this, ensuring a higher level of security for the system and protecting it from rogue applications.  These are just the qualities you need on a mobile phone or consumer electronics device where the cost of bricking your customers' favourite toys is an expensive repair and replace programme.  C# is also in this space, in this recent article on The Death of Mono notice that the knight in shining armour is driven by a mobile-based business case.

So if you want to use C# and .Net to develop web applications it seems to me that you are better sticking with Microsoft's technology stack and playing in that community because running your code on other platforms is likely to get harder, not easier.  And so the Java/C# schism lives on in the web app world.

Python and the Java/C# Schism

Given that the C# and Java communities seem to be playing out an each-to-their-own strategy, it got me wondering about the Python community and how IronPython and Jython fit in.  Python started out as a scripting language implemented in C/C++.  There is typically no virtual machine or sandbox; it is just a pleasant and convenient way to spend a few days writing programs that you would previously have wasted years of your life trying to implement in C++.  The Python framework is a blend of modules written purely in Python with some bare-to-the-metal modules that wrap underlying C++ code.


Given that both Java and C# provide C/C++-like environments with the added safety of a sandbox and garbage collection, implementing Python on these platforms was a logical step, and Jython (Python on Java) and IronPython (Python on .Net) have even caused the word CPython to enter the vocabulary as the name for the original Python interpreter.

In an earlier blog post I described my first steps with IronPython and described how previous attempts to implement PyAssess and my QTI migration tool had failed on Jython.  With hindsight, I shouldn't be too surprised to see that the IronPython developers have made the same decisions and that my code fails on IronPython for the same reasons it fails on Jython.  The technical issue I'm having is described in this discussion thread, which raises concerns of a schism in the Python community itself!

Actually, the trajectory of CPython towards Python 3 should solve this problem, and Jython, IronPython and CPython should converge again on the unicode-vs-string issue, though when that will be is anyone's guess because Python 3 is not backwards compatible.  Not only will code need to be reviewed and, in some cases, rewritten, but the conversion process will effectively fork most projects into separate source trees, which will make maintenance tricky.

As with the Java and C# communities, the framework is just as important as the language, and probably more so in defining the community.  Even if the basic language converges on the three platforms, it seems likely that the C#/Java schism will mean that most projects for IronPython will exist as a more pleasant and convenient way of implementing a C#/.Net project rather than as a target platform for cross-platform projects.  For example, Python frameworks like wxPython (a GUI toolkit for desktop apps) rely on the commonality of an underlying framework (the C++ wxWidgets library) and so are unlikely to emerge while the Java/C# schism remains.

2011-05-29

Snow Leopard, wxPython and py2app

As I write this blog post I'm happy to say that I have finally managed to get a new build of the QTIMigration tool working on OSX.  However, this post is not about the migration tool so much as the process of getting the binaries built on a Snow Leopard-based machine.


The QTIMigration app runs in either GUI or command-line mode.  The GUI is based on wxPython, which does not run well on 64-bit Python builds.  The GUI part was written by Pierre and hasn't been changed in 3 years; back in 2008 we had no trouble using py2app to bundle up a binary distribution.


In Snow Leopard, the default Python interpreter runs in 64-bit mode.  It takes a bit of fiddling, but it is relatively straightforward to check out the migration tool source and run it from the terminal, forcing the interpreter into 32-bit mode to satisfy wxPython.  I found this stackoverflow thread helpful in understanding the issue and ended up with a little script like this on my path:


#! /bin/bash
arch -i386 /usr/bin/python2.6 "$@"

I called the above script python32 (which seems dumb now that python3.2 is out) and it works well enough.

So to build the binary distribution, in theory, all I need to do is run py2app from the command line...

python32 setup.py py2app

Unfortunately, the resulting app fails when run with an unusual error about a missing attribute:

AttributeError: 'module' object has no attribute 'TickCount'

The error can be found on the system console.  As ever, someone has experienced the problem before (for example, see this post), but the real solution lies in the sage advice that the best way to run py2app is to use a standard Python distribution from python.org and to ignore the one that came with OSX.  Furthermore, if you want to create applications that will run in 32-bit mode you need to install the 32-bit architecture version of Python.

So I downloaded python-2.7.1-macosx10.3.dmg and installed it.  Fortunately there is no python2.7 on Snow Leopard, so there is no problematic name clash to resolve.  I also installed setuptools by downloading setuptools-0.6c11-py2.7.egg (note that the egg installs into whatever interpreter the python2.7 command points to).  Then I installed wxPython from wxPython2.8-osx-unicode-2.8.12.0-universal-py2.7.dmg.  There doesn't seem to be a way of forcing the wxPython installer to use a particular Python but again, it seemed to find its way into my new python2.7 install without difficulty.  At last, I was ready to install the other modules needed by the migration script, including py2app.

To avoid confusion I extracted the tars manually and ran each of the add-in modules setup.py script using:

python2.7 setup.py install

This step included installing the new pyslet package I'm working on and its dependencies.

Finally, I was able to re-run the py2app package step using my new python environment:

python2.7 setup.py py2app

The resulting binary worked!  Although I no longer have an older Mac on which to properly test the compatibility of the new binary, I could at least check that it worked on a machine that hasn't had the custom Python build applied.

I guess all this extra complexity has at least helped to test the code a bit more thoroughly.  I was pleased that the unit tests for pyslet all ran fine on Python 2.7.  Unfortunately the migration tool itself has a bug when handling non-ASCII file names.  This is because my python2.7 environment is now capable of using Unicode strings for file names, but at one point the migration code uses the old urllib.pathname2url, which chokes on my Chinese examples (as they have Chinese filenames).  I believe this behaviour is different from the built-in Python on Snow Leopard, but either way there is no easy fix and it looks like I'll have to wrap or replace my use of this function before I can post the new OSX binaries.
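
The wrapper need only be a few lines.  A minimal sketch of the sort of thing I mean (Python 2, assuming UTF-8 is an acceptable encoding for the escaped URL):

import urllib

def pathname2url(path):
    # urllib.pathname2url chokes on unicode paths containing
    # non-ASCII characters, so encode to UTF-8 bytes first
    if isinstance(path, unicode):
        path = path.encode('utf-8')
    return urllib.pathname2url(path)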

Watch this space, I feel like I'm getting close to a new binary distribution now.

2011-05-27

International alarm rings over UK ICT policy

This is an interesting article drawn to my attention by SALTIS. The position taken by ISO/IEC and its advisors seems at odds with the benefits brought about by open standards from bodies such as the IETF (think 'internet' and 'world-wide-web').

International alarm rings over UK ICT policy - Public Sector IT

Of course, failure to deliver on the open standards strategy is likely to reduce competition and drive up costs, but perhaps the strangest part of this is the monopoly governments give to the national standards bodies in the first place.  Perhaps simply opening up the world of standards itself to competition will be enough to bring about the changes required.

There are parallels with academic publishing here, as it is not the authors getting rich from the charges levied on the purchase of standards documents; I love this article from the ACM in 1995: Standards, Free or Sold?.  ISO now lumps publications income with 'other services' in its financial statements and there is still no easy way of looking at the publication costs, but clearly these could be reduced to a minimum by adopting royalty-free redistribution policies.  Either way, most of the funding required to develop a standard is donated by the sponsoring organizations or is supported indirectly by government programmes to help industry.  ISO's FAQ points out that publications income actually helps balance these larger interests, but this argument seems obscure.

In my personal opinion, even if the UK government fails to compel standards to be made available openly this time around (as the EU did when it caved in over royalty-free distribution earlier this year), the desired changes can still be brought about by encouraging the use of standards from de facto bodies like the W3C and IETF - after all, you are probably reading this article via HTTP!

2011-04-04

Standards: so many to choose from!

It's an old joke that gets trotted out at most standards meetings, but the truth is that most bits of technology are the result of hundreds, if not thousands, of supporting components and systems.  Just imagine how many technical standards must apply to something like the average family car once you've taken all the components into consideration.

The world of software is no less complex.  I just had a look at the UK government's survey on use of open standards: the number of standards is not only bewildering but when I got to the section on education I realized that it is clearly still only the tip of the iceberg.  And I didn't spot HR-XML, SIF or PESC to name just a few.

Those with an interest in UK public sector procurement might want to hop along to the survey website.

2011-04-01

My First Steps into the Iron Age

When PyAssess was originally being developed we did some experiments on getting it running in Jython.  Jython is an alternative implementation of the Python interpreter which runs inside a Java virtual machine.  Unfortunately, we'd relied fairly heavily on being able to distinguish regular ASCII strings from Unicode strings, and this was not supported in Jython at the time.  I'm sure it has moved on since then, but I haven't had a second go - in any case, for Python 3 I'll need to sort the string/unicode issue out anyway.

C# programmers work in a similar environment to Java programmers.  (As an aside, the sheer cost to the industry of Microsoft and Sun's failure to reach an agreement in those early days must be staggering.)  Not surprisingly, there is a C# equivalent to Jython: the project is called IronPython, and I feel that it is about where Jython was when I was involved in my previous PyAssess experiments around 2003.

With my expectations set realistically I set about taking the first steps towards getting my latest python code running in the .Net environment using IronPython.

Installing IronPython and the associated toolset for Visual Studio 2010 went well and there was a useful walkthrough document to help me get started.  However, much of the documentation seems aimed at introducing experienced Windows developers to Python, whereas I could really have done with something the other way around.  My first problem was that I'd installed Visual Studio 2010 with some type of Product Management profile, and step 1 of the walkthrough involved selecting a menu option I didn't even have!  I couldn't figure out how to automatically reconfigure the menus in Visual Studio (even by rerunning the installer), so I had to go hunting for the "New Project..." menu item and add it to the File menu manually.  Still, when in Rome...

My simple "Hello World!" script went without a hitch, but I ran into the following issue almost immediately: http://ironpython.codeplex.com/workitem/29077 - I ended up writing the following code, which has to be used as a prefix to the first loaded Python module in the project (and assumes you've installed IronPython in the default location).

try:
    import string
except ImportError:
    # sys.path is not set up for the standard library, so add the
    # default IronPython 2.7 install locations and try again
    import sys
    IRONPYTHON_PATH_FIX=['.', 'C:\\Windows\\system32', 
        'C:\\Program Files\\IronPython 2.7\\Lib',
        'C:\\Program Files\\IronPython 2.7\\DLLs',
        'C:\\Program Files\\IronPython 2.7',
        'C:\\Program Files\\IronPython 2.7\\lib\\site-packages']
    sys.path=sys.path+IRONPYTHON_PATH_FIX
    import string

As usual, the command window in Windows seems to disappear before you've had a chance to read the output of your program, but I did eventually get the following script working (with the above header, of course):

import string, time
print string.join(['Hello','World!'],' ')
time.sleep(10)

In the traditional spirit of starting to run as soon as I'd taught myself to walk, I checked out the latest Python package code (pyslet) from the QTI migration project and installed it.  I was intrigued that IronPython has byte-compiling disabled, but this doesn't seem to prevent the install from completing.

My next task was to check out the unit tests and run them against the installed module.  At this point I tripped over my laces and fell flat in the mud: setuptools is not supported on IronPython and, therefore, the pkg_resources module I use to check dependencies in the unit tests is not available.

It is probably too much to expect a complex module like setuptools to work at this stage; I feel somewhat chastened by the realization that it isn't part of the main Python distribution yet anyway!  This two-year-old blog post suggests that problems getting zlib working are holding it back, but the good news is that zlib is reported to be fixed in the latest release of IronPython (2.7) - this was only released a couple of weeks ago and is one of the reasons why I'm looking at this environment now.

So although progress has halted, I think I can work around the lack of pkg_resources.  I now plan to add exception handling to prevent it aborting the tests and have another go, at which point I'll post an update on progress to this blog.
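
The workaround should amount to little more than an import guard, something like this sketch:

try:
    import pkg_resources
except ImportError:
    # e.g., on IronPython: skip the dependency checks rather than
    # aborting the whole test run
    pkg_resources = None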

2011-03-14

OAuth, Python and Basic LTI

On a recent long flight I was working on a Python script to act as a bridge between an IMS Basic LTI consumer and Questionmark Perception motivated by a rash claim that this was achievable given a suitably long flight away from other distractions.

The first part of the job (undertaken at Heathrow's Terminal 3) was to download the tools I would need.  The moodle on my laptop was still on 1.9.4 so I needed to upgrade before I could install the Basic LTI module for Moodle 1.9 and 2.  Despite the size of the downloads the 3G reception is great at Heathrow.

Basic LTI uses OAuth to establish trust between the Tool Consumer (Moodle in my case) and the Tool Provider (my script), so I needed a library to jump-start support for OAuth 1.0 in Python.  Consensus on the web seems to be that the best modules are available from the Google Code project called, simply, 'oauth'.  The Python module listed there is straightforward to use, even without a copy of the OAuth specification to hand.

Of course, these things never go quite as smoothly as you would like (and I'm not just talking about turbulence over Northern Canada).  I put together my BLTI module and hooked it up to Moodle but there were two critical problems to solve before I could make it work.

Firstly, BLTI uses tokenless authentication and the Python module has no method for verifying the validity of a tokenless request.  As a result, I had to dive in a bit deeper than I'd hoped.  Instead of calling the intended method, oauth_server.verify_request(oauth_request), I'm having to unpick that method and make a low-level call instead: oauth_server._check_signature(oauth_request, consumer, None) - the leading underscore is a hint that I might get into trouble with future updates to the oauth module.
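
In context, the workaround looks something like the sketch below.  The datastore setup is abbreviated from the oauth module's example code and, as noted, the private _check_signature call may break with a future release:

import oauth.oauth as oauth

# datastore is your own OAuthDataStore subclass, able to look up the
# consumer key issued to the Tool Consumer (Moodle in this case)
oauth_server = oauth.OAuthServer(datastore)
oauth_server.add_signature_method(oauth.OAuthSignatureMethod_HMAC_SHA1())

# verify_request expects a request token, but a Basic LTI launch is
# tokenless, so check the signature directly with token=None
oauth_server._check_signature(oauth_request, consumer, None)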

Once I'd overcome that problem, I was disappointed to find that my tool provider still failed, this time with a signature validation error.  The tool consumer in Moodle was signing the request in a way that my module was unable to reproduce.  The BLTI launch call can take quite a few extra parameters, and all of these variables need to be put into the hash.  It's not quite a needle in a haystack, but I looked nervously at my remaining battery power and wondered if I'd find the culprit in time.

The problem turns out to be a small bug in the server example distributed with the Python oauth module.  It relates to the way the URL has to be incorporated into the hash (Section 9.1.2 of the OAuth spec).  The example server assumes that the path used by the HTTP client will be the full URL.  In other words, it assumes an HTTP request like this:

POST http://tool.example.com/bltiprovider/lms.example.com HTTP/1.1
Host: tool.example.com
....other headers follow

In the example code, the oauth request is constructed by a sub-class of BaseHTTPRequestHandler like this:

oauth_request = oauth.OAuthRequest.from_request(self.command,
  self.path, headers=self.headers, query_string=postdata)


When I was testing with Moodle and Chrome, my request looked more like this:


POST /bltiprovider/lms.example.com HTTP/1.1
Host: tool.example.com

This resulted in a URL of "///bltiprovider/lms.example.com" being added to the hash.  Once the problem is identified, it is fairly straightforward to use the urlparse module to identify the shorter form of request and recombine the host header and scheme to make the canonical URL.  I guess a real application is unlikely to use BaseHTTPRequestHandler, so this probably isn't a big deal, but I thought I'd blog the issue anyway because I was pleased that I found and fixed it before I had to sleep my MacBook.
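
The fix is only a few lines with urlparse; a sketch, assuming a BaseHTTPRequestHandler subclass like the example's and plain HTTP:

import urlparse

def canonical_url(self):
    # self.path is either a full URL (proxy-style request line) or
    # just the absolute path, as sent by Moodle and Chrome above
    if urlparse.urlparse(self.path).scheme:
        return self.path
    # otherwise recombine the scheme and Host header with the path
    return 'http://' + self.headers['Host'] + self.path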

2011-02-18

Debugging unsortable problems in Python

Working in Python 2.6.1 on my Mac I noticed the following behaviour recently while debugging the QTI migration code:

>>> 'z'<('a','b')
True
>>> ('a','b')<u'a'
True
>>> u'a'<'z'
True

These three comparisons between a string, a tuple and a unicode string demonstrate that it is easy to create an unsortable list out of basic immutable objects, such as might be used as keys in a dictionary.

This might look a bit esoteric, but I'm only writing this blog post because I caught a bug caused by the incorrect assumption that lists of strings, tuples and unicode strings sort predictably.  I was representing XML attribute names using tuples when an attribute had a defined namespace.  The names were then used as keys into a dictionary.  Note that 'a' and u'a' can be used interchangeably in Python 2.6 when looking up an entry in a dictionary, so it was easy to go one step further: grab the list of keys, sort them, and assume that the result would be predictable.  Not so.

The order of the keys returned by the keys() method of a dictionary is not defined, and the sort method will return different results depending on the initial order of the resulting list.
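
The comparisons above actually form a cycle ('z' < ('a','b') < u'a' < 'z'), so the same keys can sort differently depending on their starting order.  A sketch of the symptom (Python 2; the exact outcome depends on the sort's comparison order):

keys1 = ['z', ('a', 'b'), u'a']
keys2 = [u'a', 'z', ('a', 'b')]
print sorted(keys1) == sorted(keys2)  # can print False!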

It took me a while to find someone else struggling with a similar problem, but I took great solace in Incomparable Abominations, a post dealing with changes from Python version 1 to version 2.

I believe that Python 3 does two things to address the problem I'm having.  Firstly, the sloppy lack of distinction between strings and unicode strings is being cleaned up.  The transition will be painful (and will mean more work getting the QTI migration tool working on Python 3-based systems) but it will prevent the type of comparison loop above.  Comparisons are also being tightened to prevent different types comparing unpredictably: a (unicode) string and a tuple will not be comparable in future, meaning bugs like this one will be caught earlier.

So a better future awaits, but why do the comparisons give the results they do in Python 2?  The answer is almost poetic.  Objects of different types usually sort by their class name; the comparison of a string and a unicode string is the exception because, provided the string is 7-bit clean, it is assumed to be ASCII and compared as a string of characters.  We can reveal the class names using the interpreter:

>>> 'z'.__class__.__name__
'str'
>>> ('a','b').__class__.__name__
'tuple'
>>> u'a'.__class__.__name__
'unicode'

As you can see, the type names start with the alphabetic sequence 's','t','u'.

2011-02-03

Semantic Markup in HTML

A few days ago I spotted an interesting link from the BBC about the use of semantic markup.

This page got me thinking again about something I blogged about on my Questionmark blog too.  One of the problems we experienced during the development of QTI was the issue of 'presentation'.  In QTI, the presentation element refers to the structured material and interaction points that make up the question.  However, to many people the word 'presentation' means the form of markup used at the point the question is delivered.

I always found this confusion difficult.  Clearly people don't present the XML markup to candidates, so the real fear was that QTI would allow question authors to specify things that should be left open until the method of presentation is known by the delivery system.

For some people, using a language like HTML implies that you have crossed this line.  But the BBC page on using HTML to hold semantic markup is heartening to me because I think that QTI would be better bound directly into HTML too.

HTML has been part of QTI since the early days (when you had to choose between HTML and RTF for marking up material).  With QTI version 2 we made the integration much closer.  However, XHTML was in its infancy and work to make the XHTML schema more flexible through use of XML Schema and modularisation of the data model was only just getting going.  As a result, QTI contains a clumsy profile of XHTML transformed into the QTI namespace itself.

In fact, XHTML and XML Schema have not played so well together and HTML5 takes the format in a new technical direction as far as the binding is concerned.  For QTI, this may become a block to the rapid building of innovative applications that are also standards compliant.

But bindings are much less important than information.  I always thought that QTI XML would be transformed directly into HTML for presentation by server-side scripts or, if required, by pre-processing with XSLT to make HTML-based packages.  That hasn't really happened, so perhaps it is harder than I thought.

However, I did a little research and have had no difficulty transforming the simple QTI XML examples from QTI version 2 into XHTML 5 and back again, using data-* attributes to augment the basic HTML markup.  I'll post the transform I used if there is interest - please add a comment/reply to this post.
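
To give a flavour, a simpleChoice might survive the round trip as something like the fragment below.  This is an invented illustration, not the transform's actual output:

<!-- hypothetical HTML5 carrying QTI semantics in data-* attributes -->
<li data-qti-element="simpleChoice" data-qti-identifier="choiceA">
  <input type="radio" name="RESPONSE" value="choiceA"/> A triangle
</li>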

Steve

2011-01-22

Yes, I have started a blog!

OK, actually I already have a blog!

In fact, I have two.  Well sort of.

You can find out about my work at Questionmark from the main Questionmark blog and you can find more technical articles on my developer blog on Questionmark's developer site.

So why do I need another blog?

I created this blog so that I could write about technical things that are not directly to do with my work at Questionmark but relate to my work on the QTI migration tool (qtimigration on Google Code).  This project has been much neglected over the last few years and, with renewed interest in QTI version 2 and a steady drip, drip of requests for changes, I thought it was about time I did something about it.

The migration tool followed in the footsteps of PyAssess, which I worked on about 5 years ago with Alice while we were working at Cambridge Assessment.  Both PyAssess and the migration tool were important projects that helped to guide the specification development process I was involved in at IMS.  Earlier today I checked the PyAssess source into a branch of the qtimigration repository; I don't have access to the old SVN repository so have had to abandon the history.  I'm also sad to say that the unit tests aren't all passing, but they stand more chance of being fixed now that they're in source control again.

I still find myself dabbling in Python from time to time and have continued to develop some of the modules used to help support and test implementations of standards for learning, education and training.

I have also recently rediscovered the joys of wxPython.  I do most of my work on a Mac and struggled along with PyObjC for quite a while, but I recently upgraded my last Mac to Snow Leopard and my beautiful Cocoa-based interfaces for my homespun Python scripts stopped working.  I spent a frustrating weekend trying to port them; in the end, I gave up and rewrote the interfaces in wxPython!

wxPython is actually used in the Windows/GUI wrapper for the migration tool that Pierre Gorissen created.  His work was merged into the trunk with a bit of frantic hotel-lobby coding following an IMS meeting a few years ago.  I feel better equipped to maintain it now.