Why is Duolingo unpedagogic? (Part 2 – here’s why!)
In an earlier post I argued that Duolingo is not in fact unpedagogic, despite a perception among some educators that it is. I argued that Duolingo does work (to an extent) and that its success in teaching vocabulary through a motivating, well-designed learner experience is a good example of digital learning.
However, the perception among educators that it is pedagogically weak is not unfounded.
One of the things that frustrates me most, and weakens Duolingo’s authority, is the arbitrary assertion that language proficiency can be reduced to a percentage score.
According to Duolingo I have a fluency rating for French of 33%. This has been as high as 44%, but I have admittedly slipped recently as I have been struggling to find time for language practice. One exercise later, in which I translated “Je suis une baleine” (“I am a whale”) and “Le cochon est un animal” (“The pig is an animal”) a few times, my fluency jumped to 36%.
To say that I am one-third fluent is ridiculous, though. How can a percentage score for fluency be calculated? What is it based on? Do I understand 33% of all language input? (I don’t; I still struggle to get anything more than the most basic gist when I hear people speak French, and even then only when it’s parents speaking to young children!) Do I know 33% of the French lexicon? Perhaps a third of the most common words, but on any other definition the number is clearly bullshit. I would not want to add this figure to my LinkedIn profile even if I were in the 80–90% bracket, for fear that it suggested I believed in this nonsense.
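My suspicion is that the score is little more than lexical coverage of a word-frequency list, which would explain why a handful of whale-and-pig sentences can nudge it upwards. Here is a purely hypothetical sketch of such a metric; the function, the cut-off and the formula are all my invention, since Duolingo has never published how the score works:

```python
# Purely hypothetical sketch of a coverage-based "fluency" score.
# Duolingo has never published its formula; every name and number
# here is invented for illustration.

def fluency_percentage(known_words: set[str], frequency_list: list[str]) -> float:
    """Percentage of the top-N most frequent words the learner has met."""
    top_n = frequency_list[:3000]  # arbitrary cut-off
    met = sum(1 for word in top_n if word in known_words)
    return 100 * met / len(top_n)

# Having met ~1,000 of the 3,000 most common French words would report
# "33% fluent", even if you cannot follow a single spoken sentence.
```

A metric like this would measure exposure, not comprehension, which is exactly why the number feels meaningless.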
Admittedly, this doesn’t have much to do with the method and practice of the actual teaching, but it does undermine confidence in the product as a whole.
Specifically linked to the methodology is the fact that content is introduced in a seemingly random order. A new lesson can introduce new language, yet ask you to produce it before it has even been taught. This is clearly flawed, and it is also something that could presumably be fixed with a tweak to the algorithm that decides the order of the questions.
There is a fair argument for introducing new language in context so that learners work out meaning and usage for themselves, but asking someone to translate from L2 (the second language, i.e. the one being learned) into L1 (the first language) when a word has never previously been introduced is clearly flawed. This happens too often in Duolingo and, to me, points either to a bug in the sequencing logic or to a gap in the understanding of language acquisition; a sketch of the kind of fix I mean follows.
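To make that concrete, here is a minimal sketch of such a sequencing constraint. Everything in it is hypothetical, since Duolingo’s real exercise model is not public; the rule is simply that a word must appear in a receptive exercise before any exercise asks the learner to produce it:

```python
# Minimal sketch of a sequencing constraint for lesson exercises.
# All names are hypothetical; this is not Duolingo's actual algorithm.

from dataclasses import dataclass

@dataclass
class Exercise:
    words: set[str]     # vocabulary the exercise uses
    productive: bool    # True = learner must produce the words (e.g. L1 -> L2)

def order_exercises(exercises: list[Exercise]) -> list[Exercise]:
    """Reorder so no word is tested productively before it has been shown."""
    taught: set[str] = set()
    ordered: list[Exercise] = []
    pending = list(exercises)
    while pending:
        # Pick the first exercise whose productive words have all been taught;
        # receptive exercises are always safe to show.
        ready = next(
            (e for e in pending if not e.productive or e.words <= taught),
            None,
        )
        if ready is None:       # no valid order exists for the remainder:
            ready = pending[0]  # fall back, but this lesson needs a content fix
        pending.remove(ready)
        taught |= ready.words   # every exercise exposes its words to the learner
        ordered.append(ready)
    return ordered
```

Even a greedy pass like this would stop the worst cases; a real implementation would presumably also interleave review and flag lessons where no valid order exists.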
There are also times when the question type chosen is not testing the target language at all. This screenshot shows a question in a lesson testing French demonstratives (ceci/cela/celui etc.). All it actually tests, though, is my knowledge of the words lit, parle and père. I do not need to show any understanding of the target language, and I am left none the wiser about its usage.
No critique of Duolingo would be complete without mentioning the nonsense it gets you to translate much of the time. I actually don’t mind this much, as it still reinforces language and can sometimes amuse, but it would be more useful to be given sentences that can actually be used. Next time I’m in France I will try to slip “I am a duck” and “The shark is eating the dolphin” into conversation. If it’s anything like a Duolingo exercise, I will have to slip each one in three or four times in a row before moving on to any other language.
There are also some interesting differences between the web and mobile versions which hinder learning. The web version has some very useful grammar explanations: clear usage notes that help you understand the language being practiced. These are missing from the mobile version, and I can’t understand why. I was really struggling with the usage of some possessive pronouns, which a short explanation like the one on the web would have cleared up. Instead I had to look for an explanation outside the app.
Conversely, the website does not have the breadth of exercise types seen in the app. The number of direct L2-to-L1 translations becomes tiring; why can’t the exercises be mixed up more, as they are in the app?
A final point is the lack of anything beyond the sentence level. There are no dialogues and no texts of paragraph length or longer. There is no listening practice beyond single items. The web version provides an Immersion area which allows for the translation of full texts, but this is missing from the mobile version. Even as you progress through the lessons, there is no increase in the amount of language the learner has to deal with.
This is a significant weakness, and one which undermines any sense of progress: there is nothing to show users that they can understand significantly more than they used to.
Maybe this is why they have added the contrived fluency indicator, which now shows me as 41% fluent after a few exercises done in parallel with this blog post. I wish learning French really were that easy!