Effective assessment for the next generation of coders

With an ever-growing demand for workers with computer programming skills, how can higher education ensure it teaches and assesses student coders effectively enough to prepare them for the 21st century workplace? LITE's Dr Craig A. Evans and Dr Sam Wilson explain how their project to create an automated assessment platform that provides useful feedback could be a step towards meeting the challenge ahead.

As the world’s reliance on technology continues to grow, there is an increasing demand for workers with the relevant skills who can enable businesses to thrive in the global digital economy.

Amongst these skills is computer programming. This has been particularly recognised in the UK, where programming is now taught in primary schools.

FINDING a solution: Dr Sam Wilson, who is jointly developing a platform to effectively assess the work of coding students.

Computer programming also features across a range of disciplines in higher education, most notably in STEM subjects.

Graduates of these subjects are increasingly expected to be competent programmers, and jobs requiring coding skills can be found in all sectors, not just science and engineering.

It is therefore extremely important for us to ensure our graduates have the chance to develop these skills to enable them to compete in the global jobs market.

Teaching Programming

We each teach several modules featuring programming in the School of Electronic and Electrical Engineering and the School of Computing at the University of Leeds.

They tend to be practical modules, with students completing lots of exercises and writing lots of code in order to develop their programming knowledge and skills.

LITE project: Dr Craig Evans, who is jointly working to create effective assessment software for student programmers.

A lot of the knowledge required for programming is tacit and cannot be communicated by lecturing or even giving practical demonstrations.

It is just like learning to play the piano: you cannot learn by listening to a concert pianist tell you how they play, or even by watching them play. You have to practise, practise, practise until you start to develop your own skills and knowledge.

A teacher on these types of modules therefore has to take on the role of a facilitator, providing feedback and guiding students towards a solution rather than simply writing their code for them.

Challenges

Over the last few years, the number of students on these modules has grown dramatically, with cohorts of 150+ becoming the norm.

This is particularly challenging for this type of teaching. With lecturing, it doesn't hugely matter whether you are lecturing to 20 or 200 students.

However, when you have hundreds of students writing lots of code every week and rightly expecting some guidance and feedback when they go wrong, it becomes very difficult.

Although it can be very rewarding to sit down with a student, go through their code and help them figure out where they have made a mistake, debugging code is a black hole.

Sometimes it takes 5 seconds, sometimes it takes 5 minutes and other times it may take 5 hours! It simply isn’t possible to sit down with every student and help them with every little problem they encounter.

In any case, this isn't a good learning strategy, as the student can become reliant on this kind of help instead of developing their own debugging skills.

A key part of being a competent programmer is having the ability to find and fix the mistakes you’ve made. Companies do not employ people to go around and fix code written by other employees.

So how can we support lots of students, often outside of contact time, to help them develop their programming skills?

Automated assessment and feedback 

To manually assess assignments and provide detailed, personalised feedback within the standard University turnaround time is essentially impossible; this is evident from the low student satisfaction with assessment and feedback across the entire higher education sector.

We had therefore each independently developed our own software tools to automatically test student code, and had used them in summative assessments.

These had proven very useful when it came to quickly marking hundreds of assignments and tests, but the feedback they provided was not always helpful.

The tools mainly test code functionality and, as a result, typically work on a pass/fail basis. If the code fails a test, the tool doesn't tell the student why it failed, and hence how to learn from their mistakes.
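To make this concrete, here is a minimal sketch (in Python) of the kind of bare pass/fail functional testing our original tools performed. The executable name ./student_sum and the test cases are purely illustrative, not taken from either tool:

import subprocess

def run_case(executable, stdin_text, expected_stdout):
    # Run the student's program on one input and compare its output.
    result = subprocess.run(
        [executable],
        input=stdin_text,
        capture_output=True,
        text=True,
        timeout=5,  # guard against infinite loops
    )
    return result.stdout.strip() == expected_stdout.strip()

# Hypothetical cases for a program that reads two integers and prints their sum.
# The student learns only how many cases passed, not why any failed.
cases = [("2 3\n", "5"), ("10 -4\n", "6")]
passed = sum(run_case("./student_sum", given, wanted) for given, wanted in cases)
print(f"{passed}/{len(cases)} tests passed")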

We therefore decided to join forces on this LITE teaching enhancement project and work together to develop a platform that can automatically test student code and provide on-demand, detailed, personalised feedback.

Students will be able to submit their code to a web application, which will run various tests on the code and provide feedback not only on functionality but also on code structure and quality.

In the event that the code does not function correctly, the platform also gives helpful feedback on where mistakes may have been made.
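As a rough illustration of what we mean by richer feedback, the Python fragment below turns simple static checks and failed test cases into hints a student can act on. The specific checks and messages are hypothetical examples, not the platform's actual rules:

import re

def review_source(source):
    # Return human-readable hints on code structure and quality.
    hints = []
    for number, line in enumerate(source.splitlines(), start=1):
        if len(line) > 100:  # a crude readability proxy
            hints.append(f"Line {number} is over 100 characters; consider splitting it.")
    if not re.search(r"^\s*def\s+\w+", source, re.MULTILINE):
        hints.append("No functions found; try breaking the program into functions.")
    return hints

def explain_failure(case_input, expected, actual):
    # Turn a failed functional test into feedback the student can act on.
    if actual == "":
        return (f"Your program printed nothing for input {case_input!r}; "
                "check that you print the result.")
    return (f"For input {case_input!r} we expected {expected!r} but got "
            f"{actual!r}; check your calculation and output formatting.")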

This will empower students to tackle formative learning exercises in their own time and will be a step-change in the amount of feedback they receive on their work.

Roadmap

The platform is currently in development and we expect a beta version to be ready for testing during semester one, ready to be rolled out for use in teaching in semester two.

We’ve designed the platform to be flexible and plan to expand it to support multiple programming languages.

We’d love to hear from staff around the University who teach programming who are interested in trialling the platform or expanding it to support other languages.

Learning Analytics

An exciting added bonus of the platform will be the potential to apply learning analytics to the collected data.

Programming is an iterative process, yet currently we see only the final submitted version of the code, not the many iterations it may have gone through.

As students submit the different iterations for testing, we will be able to track the changes and observe the problem-solving strategies they have employed in order to arrive at their final solution.
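Purely as a sketch of how this tracking could work (the platform's actual storage scheme may well differ), each submission could be recorded with a timestamp and consecutive versions compared with a diff:

import difflib
from datetime import datetime, timezone

submissions = []  # (timestamp, source) pairs for one student's exercise

def record_submission(source):
    # Store each attempt as it is submitted for testing.
    submissions.append((datetime.now(timezone.utc), source))

def iteration_diffs():
    # Yield a unified diff between each pair of consecutive attempts,
    # showing how the solution evolved on the way to the final version.
    for (t1, old), (t2, new) in zip(submissions, submissions[1:]):
        yield "\n".join(difflib.unified_diff(
            old.splitlines(), new.splitlines(),
            fromfile=t1.isoformat(), tofile=t2.isoformat(),
            lineterm="",
        ))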

This game-changing insight will be invaluable in understanding how students actually learn to code.

We will also be able to identify threshold concepts that students struggle with, and be proactive in providing additional support to help students understand them.