Flesch reading ease for stylometry?

The Flesch reading-ease score (FRES, also called FRE – ‘Flesch Reading Ease’) is still a popular measurement for the readability of texts, despite some criticism and suggestions for improvement since it was first proposed by Rudolf Flesch in 1948. (I’ve never read his original paper, though; all my information is taken from Wikipedia.) On a scale from 0 to 100, it indicates how difficult it is to understand a given text based on sentence length and word length, with a low score meaning difficult to read and a high score meaning easy to read.

Sentence length and word length are also popular factors in stylometry, the idea here being that some authors (or, generally speaking, kinds of text) prefer longer sentences and/or words while others prefer shorter ones. Thus such scores based on sentence length and word length might serve as an indicator of how similar two given texts are. In fact, FRES is used in actual stylometry, albeit only as one factor among many (e.g. in Brennan, Afroz and Greenstadt 2012 (PDF)). Compared to other stylometric indicators, FRES has the added benefit that it says something about the text in itself, rather than being merely a number that only means something in relation to another text.

The original FRES formula was developed for English and has been modified for other languages. In the last few stylometry blogposts here, the examples were taken from Japanese manga, but FRES is not well suited for Japanese. The main reason is that syllables don’t play much of a role in Japanese readability. More important factors are the number of characters and the ratio of kanji, as the number of syllables per character varies. A two-kanji compound, for instance, can have fewer syllables than a single-kanji word (e.g. 部長 bu‧chō ‘head of department’ vs. 力 chi‧ka‧ra ‘power’). Therefore, we’re going to use our old English-language X-Men examples from 2017 again.

The comics in question are: Astonishing X-Men #1 (1995) written by Scott Lobdell, Ultimate X-Men #1 (2001) written by Mark Millar, and Civil War: X-Men #1 (2006) written by David Hine. Looking at just the opening sequence of each comic (see the previous X-Men post for some images), we get the following sentence / word / syllable counts:

  • AXM: 3 sentences, 68 words, 100 syllables.
  • UXM: 6 sentences, 82 words, 148 syllables.
  • CW:XM: 7 sentences, 79 words, 114 syllables.

We don’t even need to use Flesch’s formula to get an idea of the readability differences: the sentences in AXM are really long and those in CW:XM are much shorter. As for word length, UXM stands out with rather long words such as “unconstitutional”, which is reflected in the high ratio of syllables per word.

Applying the formula (cf. Wikipedia), we get the following FRESs:

  • AXM: 59.4
  • UXM: 40.3
  • CW:XM: 73.3
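
These scores can be recomputed from the raw counts with a few lines of Python; the following is a minimal sketch of the standard Flesch formula (as given on Wikipedia), not the exact tool used here:

```python
def flesch_reading_ease(sentences, words, syllables):
    """Flesch reading-ease score from raw counts (standard formula)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Counts from the three opening sequences above
samples = {
    "AXM":   (3, 68, 100),
    "UXM":   (6, 82, 148),
    "CW:XM": (7, 79, 114),
}
for name, counts in samples.items():
    print(name, round(flesch_reading_ease(*counts), 1))
# AXM 59.4, UXM 40.3, CW:XM 73.3
```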

Who would have thought that! It looks like UXM (or at least the selected portion) is harder to read than AXM – a FRES of 40.3 is already ‘College’ level according to Flesch’s table.

But how do these numbers help us if we’re interested in stylometric similarity? All three texts are written by different writers. So far we could only say (again, based on an insufficiently sized sample) that Hine’s writing style is closer to Lobdell’s than to Millar’s. The ultimate test for a stylometric indicator would be to take an additional example text written by one of the three authors, and see if its FRES is close to that of the same author’s X-Men text.

Our 4th example will be the rather randomly selected Nemesis by Millar (2010, art by Steve McNiven) from which we’ll also take all text from the first few panels.


Part of the opening scene from Nemesis.

These are the numbers for the selected text fragment from Nemesis:

  • 8 sentences, 68 words, 88 syllables.
  • This translates to a FRES of 88.7!

In other words, Nemesis and UXM, the two comics written by Millar, appear to be the most dissimilar of the four! However, that was to be expected. Millar would be a poor writer if he always applied the same style to each character in each scene. And the two selected scenes are very different: a TV news report in UXM in contrast to a dialogue (or perhaps more like the typical villain’s monologue) in Nemesis.

Interestingly, there is a TV news report scene in Nemesis too (Part 3, p. 3). Wouldn’t that make for a more suitable comparison?

Here are the numbers for this TV scene which I’ll call N2:

  • 4 sentences, 81 words, 146 syllables.
  • FRES: 33.8

Now this looks more like Millar’s writing from UXM: the difference between the two scores is so small (6.5) that they can be said to be almost identical.

Still, we haven’t really proven anything yet. One possible interpretation of the scores is that the ~30-40 range is simply the usual range for this type of text, i.e. TV news reports. So perhaps these scores are not specific to Millar (or even to comics). One would have to look at similar scenes by Lobdell, Hine and/or other writers to verify that, and ideally also at real-world news transcripts.

On the other hand, one thing has worked well: two texts that we had intuitively identified as similar – UXM and N2 – indeed showed similar Flesch scores. That means FRES is not only a measurement of readability but also of stylometric similarity – albeit a rather crude one which is, as always, best used in combination with other metrics.


Kanji-kana ratio for stylometry?

I ended my blogpost on hiragana frequency as a stylometric indicator with the remark that, rather than the frequency distribution of different hiragana in the text, the ratio of kana to kanji is used as one of several key characteristics in actual stylometric analysis of Japanese texts. I was curious to find out if this number alone could tell us something about the 4 manga text samples in question (2 randomly selected scenes from Katsuhiro Ōtomo’s Akira and 2 series from Morning magazine, Miko Yasu’s Hakozume and Rito Asami’s Ichikei no karasu – in the following text referred to as A1, A2, M1 and M2, respectively). My intuition was that the results wouldn’t be meaningful because the samples were too small, but let’s see:

This time I chose a sample size of 200 characters (hiragana, katakana, and kanji) per text.

Among the first 200 characters in A1 (i.e. Akira vol. 5, p. 16), there are 113 hiragana, 42 katakana and 45 kanji. This results in a kanji-kana ratio of 45 : (113 + 42) = 0.29.

In A2 (Akira vol. 3, pp. 125 ff.), the first 200 characters comprise 126 hiragana, 34 katakana, and 40 kanji, i.e. the kanji-kana ratio is 0.25.

In M1, there are 122 hiragana, 9 katakana, and 69 kanji, resulting in a kanji-kana ratio of 0.52.

In M2, there are 117 hiragana, 0 katakana, and 83 kanji, resulting in a kanji-kana ratio of 0.71!
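
The counting can be automated by classifying characters via their Unicode blocks. Here is a minimal sketch; the block boundaries, and the decision to count the prolonged sound mark ー as katakana, are my assumptions rather than part of the original counts:

```python
def char_class(c):
    """Classify a character by Unicode block: hiragana, katakana, kanji, or None."""
    if "\u3041" <= c <= "\u3096":
        return "hiragana"
    if "\u30a1" <= c <= "\u30fa" or c == "\u30fc":  # ー counted as katakana
        return "katakana"
    if "\u4e00" <= c <= "\u9fff":
        return "kanji"
    return None  # punctuation, rōmaji, etc.

def kanji_kana_ratio(text):
    """Kanji count divided by total kana count, ignoring other characters."""
    counts = {"hiragana": 0, "katakana": 0, "kanji": 0}
    for c in text:
        cls = char_class(c)
        if cls:
            counts[cls] += 1
    return counts["kanji"] / (counts["hiragana"] + counts["katakana"])

# The A1 figures above: 45 kanji vs. 113 + 42 kana
print(round(45 / (113 + 42), 2))  # 0.29
```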

6 hiragana, 2 katakana, 3 kanji in A2 (Akira vol. 3, p. 125).

Thus this time the authorship attribution seems to have worked: the two Ōtomo samples have an almost identical score, whereas those of the two Morning samples are completely different. Interestingly, this result contradicts the interpretation from the earlier blogpost in which I had suggested that the scientists in Akira and the lawyers in Karasu have similar ways of talking. The difference in the kanji-kana ratio between Akira and the two Morning manga, though, is explained not only through the more frequent use of kanji in the latter, but also through the vast differences in katakana usage (note that only characters in proper word balloons, i.e. dialogue, are counted, not sound effects).

Ōtomo uses katakana for two different purposes: in A1 mainly to reproduce the names of the foreign researchers, and in A2 to stretch syllables otherwise written in hiragana at the end of words, e.g. なにィ nanii (“whaaat?”) or 何だァ nandaa (“what is iiit?”). Therefore the similarity of the character use in the two Akira samples is superficial only and the pure numbers somewhat misleading. On the other hand, it makes sense that an action-packed scene such as A2 contains less than half as many kanji as the courtroom dialogue in M2; in A2 there are more simple, colloquial words for which the hiragana spelling is more common, e.g. くそう kusou (“shit!”) or うるせェ urusee (“quiet!”), whereas technical terms such as 被告人 hikokunin (“defendant”) in M2 are more clearly and commonly expressed in kanji.

In the end, the old rule applies: only with a large number of sample texts, with a large size of each sample, and through a combination of several different metrics can such stylometric approaches possibly succeed.


Hiragana for stylometry?

The other day I was made aware that some things I said in an earlier blogpost, “Author dictionaries and lexical analysis for comics”, might be misleading. So let’s be clear: if you would like to find out something about the writing style of an author or text, it’s not the best idea to look at the frequently used nouns, kanji, or other units of high semantic content. Those are more useful for analysing the content, i.e. the topic(s), of texts. In stylometry, units with low semantic content, such as function words (the, a, it, etc.), are more attractive objects of study, as they can be used almost independently of the topic and often present writers with a choice of which word to use when. In other words, the same writer tends to use the same function words and may be identified by them. (In practice, though, a combination of different characteristics is used for analysis – see the Stylometry article at Wikipedia and the references there.)

In order to automatically separate function words from content words in a digital text, part-of-speech tagging software may be employed. For Japanese, there is e.g. Kuromoji. But isn’t there a simpler way? Can’t we make use of the kanji–kana distinction used in the aforementioned earlier blogpost? If we identified kanji as the semantically rich(er) units, wouldn’t it be sufficient to focus on the kana for stylometric analysis? Maybe, maybe not. The results would probably be poorer, due to two main reasons:

  1. Every content word (noun, verb, adjective), even if usually written in kanji, may also be written in kana. For instance, 分かる (to understand) is more frequently spelled in hiragana only, わかる. So when we gather kana from a text, we might end up with unwanted content words.
  2. In inflection suffixes, hiragana are dependent on the preceding kanji, and thus ultimately on the content of the text. For instance, a text on musical performance might contain many instances of the verb 弾く hiku (to play an instrument), so one can expect the hiragana か ka, き ki, く ku, け ke and こ ko to occur more frequently than in other texts, as they are used for inflecting 弾く.

That being said, why don’t we put this kana analysis method to the test anyway? Let’s take the example from Akira vol. 5, p. 16 again in which the scientists are talking (初めまして。スタンリー・シモンズ博士です etc.). We’ll focus on hiragana and ignore katakana, as they tend to be used for nouns too. Starting from those two panels, I manually counted these and the following hiragana until I reached 100. Here are the 5 most frequent hiragana in this set:

  • de: 8
  • i: 7
  • shi: 7
  • te: 7
  • no: 6
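
Counting by hand works for a 100-character sample, but it is easy to automate. A minimal sketch, assuming the standard Unicode hiragana block is a good enough detector (and noting that ties in `most_common` may come out in any order):

```python
from collections import Counter

def hiragana_counts(text):
    """Count hiragana characters (Unicode block U+3041–U+3096) in a text."""
    return Counter(c for c in text if "\u3041" <= c <= "\u3096")

# The opening line of the A1 scene quoted above
counts = hiragana_counts("初めまして。スタンリー・シモンズ博士です")
print(counts.most_common(3))
```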

That means that, if this were a sufficiently large sample, in any other piece of text by Ōtomo, or at least within Akira, roughly 8% of its hiragana should be de, 7% should be i, etc. So I randomly picked another scene from Akira (vol. 3, pp. 125 ff.) and looked at the first 100 hiragana there. The 5 most frequently used hiragana from the previous example are used less often here, with the exception of i:

The first hiragana in the Akira 2 sample: de, su, u, ru, se, da…

  • de: 3
  • i: 8
  • shi: 1
  • te: 2
  • no: 3

In these pages in vol. 3, the most frequently used hiragana are mainly others, such as tsu (9 times – including small tsu), ga (6 times), o (5 times) and su (5 times). That, however, doesn’t tell us anything yet about the similarity of these two pieces of text (which I’m going to call “Akira 1” and “Akira 2” from here on). We need to add a third example, and for this purpose I’m going to use 100 hiragana from Miko Yasu’s Hakozume from the recently reviewed Morning magazine. If our method is successful, the differences between Hakozume and each of the two Akira scenes should be larger than those between Akira 1 and Akira 2. With frequency values for approximately 50 distinct hiragana, we now have 3 × ~50 data points on which we could unleash the whole range of advanced statistical methods. But we’ll keep things simple and just add up the differences in frequencies: Hakozume contains only 6 instances of de, i.e. 2 fewer than Akira 1; Hakozume uses i 3 times as opposed to 7 times in Akira 1, i.e. 4 fewer; Hakozume contains 6 fewer instances of shi than Akira 1; and so on. Here’s the table of frequencies of de, i, shi, te and no in Hakozume:

The first hiragana in the Hakozume sample: a, no, na, n, de, a, no, ga…

  • de: 6
  • i: 3
  • shi: 1
  • te: 6
  • no: 8

The combined difference between Hakozume and Akira 1 for these 5 hiragana would be 2+4+6+1+2 = 15. For all ~50 different hiragana, the sum is 96.
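
The distance used here – summing the absolute frequency differences per hiragana – can be sketched as follows. The five-kana dictionaries simply reproduce the tables above; in practice all ~50 hiragana would be included:

```python
def hiragana_distance(freq_a, freq_b):
    """Sum of absolute frequency differences over all hiragana in either sample."""
    kana = set(freq_a) | set(freq_b)
    return sum(abs(freq_a.get(k, 0) - freq_b.get(k, 0)) for k in kana)

akira1   = {"de": 8, "i": 7, "shi": 7, "te": 7, "no": 6}
hakozume = {"de": 6, "i": 3, "shi": 1, "te": 6, "no": 8}
print(hiragana_distance(akira1, hakozume))  # 15 for these five kana
```

For equal-sized samples this is simply the L1 (Manhattan) distance between the two frequency vectors.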

This looks like a large number, and indeed, when we calculate the difference between Akira 1 and Akira 2 in this way, the result is 82. This means the two Akira chunks are more similar in their usage of hiragana than Hakozume and Akira 1 are.

However, we’re not done yet. We still need to compare Hakozume to Akira 2. The result of this comparison may come as a surprise: the sum of differences is also 82! So Akira 2 is as similar to Hakozume as it is to Akira 1. If our goal was to find out whether a given piece of text is taken from Akira or not, our method would fail if we used Akira 2 as our base text with which to compare all others.

The first hiragana in the Ichikei no karasu sample: ha, no, ki, ka, ra, ho, do, de, ki, wo…

Just to make sure, I took another 100 hiragana from a different random manga in the same issue of Morning, Rito Asami’s Ichikei no karasu. I’ll refer to Ichikei no karasu as Morning 2 from now on, and to Hakozume as Morning 1. The results of the comparisons are even ‘worse’: while the sum of differences between Morning 2 and Akira 2 is 98 – i.e. vastly different – the difference between Morning 2 and Akira 1 is only 74, i.e. very similar.

Frequency of all hiragana in each of the four 100-hiragana samples

In a way, the results do make sense though. We’re looking at dialogue, after all, and the way scientists (in Akira 1) speak is closer to that of lawyers (in Morning 2) than that of insurgent thugs (in Akira 2). And apparently, the conversation between the two policewomen (in Morning 1) is not quite unlike the latter.

As so often, we could now blame the unsatisfactory results on the small sample size – if we had used chunks of 1000 hiragana instead of 100, surely our attribution attempts would have been more successful? We’ll never find out (unless we obtain a complete digital copy of Akira and extract the hiragana automatically). Another way to improve results would be to tweak the methodology: using data mining algorithms, more elaborate metrics such as the co-occurrence of several hiragana could be employed. In actual stylometric research, hiragana seem to be used in yet another metric – the ratio of all hiragana to all other characters (kanji, katakana, rōmaji).


Author dictionaries and lexical analysis for comics

Every once in a while I learn something at my day job that I think would be applicable to comics research too. For instance, in literary studies, dictionaries are compiled that contain all the words (or only the nouns, similar to an encyclopedia) used by a particular author, or even only those used in one single literary text. Think of it as a sort of commentary in a critical edition which explains references to real-world entities, or obscure words that aren’t used anymore, only separate from the source text and organised alphabetically.

Applying this method to comics, we would, of course, ignore all the images and lose the information they convey. On the other hand, looking at the words alone might yield interesting results. For instance, by comparing the frequency of words used in a particular comic to the frequency with which they occur in written language in general, we could test common hypotheses such as “author X uses word Y a lot”.

For comics of more than a few pages length, it would be nice to automatically create a list of all the words in digital form (at least those in speech/thought bubbles and captions – sound effects and inscriptions/labels can be difficult to automatically recognise). Unless a script for the comic you’re interested in is already available, a straightforward (though not necessarily easy) way to get such a list would be to obtain digital images (e.g. scans) of the pages of the comic, then run Optical Character Recognition (OCR) software on them.

As an example, consider these two panels from Akira, in which a scientist is introduced to some colleagues:

two panels from Katsuhiro Otomo's Akira

The OCR software www.onlineocr.net recognises the text in the five speech bubbles like this:

  1. 初めまして
  2. スタンリー・
    シモンズ博士
    ですノ
  3. よろしく
  4. ジョノレジュ
    ホックです
    よろしく・・
  5. 初めまして
    お名前は
    かねがね

As far as I can see, only two mistakes (ノレ instead of ル and ですノ instead of です) were made. Instead of focusing on nouns (for which there probably are detection algorithms for Japanese), it’s easier for now to just look at the kanji and filter out all hiragana and katakana characters. (While you can’t simply say that kanji represent nouns and kana represent other parts of speech, the idea here is that kanji tend to carry more semantic information than kana, which are often only used as inflection suffixes.) That leaves us with the six kanji 初, 名, 前, 博, 士, and 初 again.
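
Filtering the kanji out of the recognised text can be done with a simple Unicode-range check. A sketch, using the OCR output above with the two OCR mistakes corrected (the exact block boundary is my assumption):

```python
def kanji_only(text):
    """Keep only characters from the main CJK ideograph block (U+4E00–U+9FFF)."""
    return [c for c in text if "\u4e00" <= c <= "\u9fff"]

# The five speech bubbles, as recognised above (OCR errors fixed)
bubbles = ["初めまして", "スタンリー・シモンズ博士です", "よろしく",
           "ジョルジュ ホックです よろしく・・", "初めまして お名前は かねがね"]
kanji = [c for b in bubbles for c in kanji_only(b)]
print(kanji)  # ['初', '博', '士', '初', '名', '前']
```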

We can look up the frequency with which they occur in Japanese in general, e.g. the frequency rank at WWWJDIC:

  • 前: 27
  • 初: 152
  • 名: 177
  • 士: 526
  • 博: 794

i.e. 前 is the most frequent of the five, 博 the least frequent. Compare these ranks to the frequency with which they occur in our slim sample of two panels:

  • 初: 33% of all kanji
  • 前, 名, 士, 博: 17% each
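
These percentages can be recomputed from the six-kanji list – a trivial check here, but one that generalises to longer texts:

```python
from collections import Counter

kanji = ["初", "博", "士", "初", "名", "前"]  # the six kanji from the two panels
counts = Counter(kanji)
for k, n in counts.most_common():
    print(f"{k}: {100 * n / len(kanji):.0f}%")
# 初: 33%; the other four: 17% each
```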

What we can see here, if anything, is that two kanji, 士 and 博, are significantly more often used by Katsuhiro Ōtomo than by the average Japanese author. This doesn’t come as a surprise, as the compound 博士 signifies the academic title ‘Dr.’, which is the appropriate form of address for the scientists in this scene, whereas the other kanji 前, 初 and 名 are linked to names and introductions in general, and thus more often used in standard Japanese.

However, even if the frequency of 士 and 博 remained above-average if we analysed all of Akira’s over 2000 pages, that wouldn’t necessarily mean we had discovered a lexical characteristic of Ōtomo’s writing style. What it would tell us is that there is a subplot about scientists in Akira. Of course, topic analysis based on word frequency is nothing new. More interesting from a formal-lexical point of view would be if we discovered kanji used in different frequencies than we would expect with regard to the subject matter treated in Akira. In this situation it might be useful to look at synonyms: when Ōtomo had several options to express the same thing, why did he choose some words over others?

panel detail from Akira by Katsuhiro Ōtomo

For instance, on the same page as the example above, the relatively infrequent (rank 920) kanji 栄 is used as part of the word “honour” in the expression “I’m honoured to meet you”. Instead, Ōtomo could have used the phrase “nice to meet you” for a third time, using the kanji 初 again, but he didn’t. Suppose there was a significant number of further instances of 栄 in Akira, maybe that would be a formal-stylistic choice, rather than one merely implied by the content of the comic?

I’m aware that all this is very hypothetical, and that looking at just a few panels doesn’t show anything, but if I wanted to analyse a comic in this way, I would basically go on about it as described here, only with more scans. If you would like to learn more about this kind of analysis, I recommend Allen Riddell’s tutorial on “Feature selection: finding distinctive words”.