
Building knowledge to give knowledge - Part 2

There’s an unwritten rule about sequels: they’re never as good as the original... will I be able to break this curse? I’ll let you be the judge of that!
(If you haven’t read the first part, here’s the link.)

Working with the unexpected, from the beginning...

One of the aspects of the course that I highlighted as positive in the previous post is the need to have a plan and a clear understanding of the subjects to be explained.

But, as usual, everything changed. The initial assumptions derived from the provided documentation had to be revised: when I validated the plan with the client, more topics were added to the agenda, and it soon became much more ambitious. The scope changed, the subjects changed. The plan changed.

Since the course was now aimed at a supposedly advanced group of students and the agenda was already tight, I had to reorganize the material to match the schedule. My guess is that only 20% of what I had prepared could be reused.

I had to tighten the schedule even further, adding examples and exercises at a more advanced level. I decided I’d rather guide less experienced students toward a proposed solution than wait for them to finish every single exercise by themselves.

Thus, my goal became giving a general idea of every topic and pushing the students to solve 70% of the provided exercises, so they could get the whole picture.

This is how things went down.

The unfolding of events

The audience

The course began with high expectations for how the activity was going to develop, and how the students would react to it.

On a scale from 1 to 10, with 1 being “The topics are unfamiliar, so the course will be really challenging” and 10 being “The student already knows Python and most of the topics, so the course might be boring”, the client and I estimated the general level of the audience:

  • 80% of the participants had an intermediate level (5pts avg).
  • The other 20% had a really basic knowledge (2pts avg).

But from my point of view, and taking into account feedback received after giving the course, the real scenario was:

  • Only 10% were close to 5pts.
  • The other 90% were between 2 and 3pts.

If you think this made things even more complicated, you are right. It did require more effort than expected to reach the goals we had in mind. Nonetheless, the results were good, mainly because of the collaborative effort that the students put into learning as much as possible.

Audience Receptivity

The audience had a positive attitude: receptive and willing to learn. I soon noticed they were putting a lot of effort into following the demanding schedule of 8 hours a day, for 5 days, that constituted the course.

Given the high level of complexity, we tried slowing down the pace a little bit to prevent leaving anyone behind. But in spite of that, some individuals still could not keep up.
One thing that really helped was applying “Pair Programming”: by pairing with a more experienced colleague, every student could get a grasp of how to solve each problem.

Becoming a good developer undeniably requires a lot of energy, even when you’ve got the right set of tools and education at hand. As I used to say during the course: “I can show you how to use a treadmill, but if you don’t start running, at least slow and steady, you’ll never improve”. Luckily, they all got it and did their best.

Tests Receptivity

When I wrote the first post, I mentioned I had started building the course program around the idea of writing tests. I expected an audience with just a basic knowledge of the subject, so new ideas would land on fertile ground. But since the real context was far from ideal, the students split into two camps: those who supported the idea of developing with tests, and those who didn’t.

The former had applied the technique before, or at least heard of it, so they could really see its value when it came to developing software. Most of the latter had a sysadmin background and were used to working without writing tests. They even thought that tests didn’t apply to the kind of work they had to do.
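For readers unfamiliar with the approach, a test-first exercise of the kind I have in mind could look like the sketch below. The log-parsing task and the function names are my own illustration, not the actual course material: the student receives the failing test first and writes code until it passes.

```python
# Hypothetical test-first exercise (illustrative only): the test
# function is handed out first, and the student implements
# parse_log_line() until the test passes.

def parse_log_line(line):
    """Split a "HOST LEVEL MESSAGE" log line into its three parts."""
    host, level, message = line.split(" ", 2)
    return {"host": host, "level": level, "message": message}

def test_parse_log_line():
    # The expected behavior is fully specified by the test,
    # so the student knows exactly when the exercise is done.
    result = parse_log_line("web01 ERROR disk full")
    assert result == {"host": "web01", "level": "ERROR", "message": "disk full"}

test_parse_log_line()
```

A file like this can be run directly with `python`, or collected by a test runner such as pytest, which keeps the feedback loop short for the student.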

Anyway, I didn’t think there was enough time to explain the benefits of using tests in depth, or to convince everyone this was the best approach, even though I’m sure it’s a good and useful methodology for this type of course.
On top of that, I believe I wasn’t clear enough about the scope I intended. Although it was clear we would learn about scripting tools, I also intended to provide tools for software development in general, which I think would prove more useful in the long run.

I’ve come to the conclusion that this first course was really useful to try out and debug the chosen teaching technique. Although some aspects could be improved, I still believe that for a first iteration the results were quite satisfying.

In true “iterative and incremental” fashion, it’ll take a few more iterations to find the elements that need attention and get them right.

Conclusion

Generally speaking, I believe this course was a success. Based on students’ feedback, the course topics and provided materials will be really useful for their future work.

As a retrospective, these are the opportunities for improvement:

  • Do not assume that all students will have a similar level of knowledge and experience, but rather have a set of exercises for each level, so anyone can learn incrementally.
  • Refine the methodology of learning through testing, starting by having a series of problems which could be solved entirely during the course.

Giving such an intensive training course turned out to be a really challenging and satisfying experience, and the results made everything worth the effort. I take with me the gratifying feeling of helping the students, and myself, learn some pretty useful things!