Flesch reading ease for stylometry?

The Flesch reading-ease score (FRES, sometimes just FRE) is still a popular measure of the readability of texts, despite some criticism and suggestions for improvement since it was first proposed by Rudolf Flesch in 1948. (I’ve never read his original paper, though; all my information is taken from Wikipedia.) On a scale from 0 to 100, it indicates how difficult a given text is to understand, based on sentence length and word length: a low score means the text is difficult to read, a high score means it is easy to read.

Sentence length and word length are also popular factors in stylometry, the idea being that some authors (or, generally speaking, kinds of text) prefer longer sentences and/or words while others prefer shorter ones. Thus a score based on sentence length and word length might serve as an indicator of how similar two given texts are. In fact, FRES is used in actual stylometry, albeit only as one factor among many (e.g. in Brennan, Afroz and Greenstadt 2012 (PDF)). Compared with other stylometric indicators, FRES has the added benefit that it says something about a text in itself, rather than being merely a number that only means something in relation to another.

The original FRES formula was developed for English and has been modified for other languages. In the last few stylometry blogposts here, the examples were taken from Japanese manga, but FRES is not well suited for Japanese. The main reason is that syllables don’t play much of a role in Japanese readability. More important factors are the number of characters and the ratio of kanji, as the number of syllables per character varies. A two-kanji compound, for instance, can have fewer syllables than a single-kanji word (e.g. 部長 bu‧chō ‘head of department’ vs. 力 chi‧ka‧ra ‘power’). Therefore, we’re going to use our old English-language X-Men examples from 2017 again.

The comics in question are: Astonishing X-Men #1 (1995) written by Scott Lobdell, Ultimate X-Men #1 (2001) written by Mark Millar, and Civil War: X-Men #1 (2006) written by David Hine. Looking at just the opening sequence of each comic (see the previous X-Men post for some images), we get the following sentence / word / syllable counts:

  • AXM: 3 sentences, 68 words, 100 syllables.
  • UXM: 6 sentences, 82 words, 148 syllables.
  • CW:XM: 7 sentences, 79 words, 114 syllables.

We don’t even need to use Flesch’s formula to get an idea of the readability differences: the sentences in AXM are really long and those in CW:XM are much shorter. As for word length, UXM stands out with rather long words such as “unconstitutional”, which is reflected in the high ratio of syllables per word.

Applying Flesch’s formula (cf. Wikipedia), FRES = 206.835 − 1.015 × (words / sentences) − 84.6 × (syllables / words), we get the following scores (a short calculation sketch follows the list):

  • AXM: 59.4
  • UXM: 40.3
  • CW:XM: 73.3
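If you want to recompute these scores yourself, here is a minimal Python sketch using the sentence/word/syllable counts listed above (the formula is the standard English-language one from Wikipedia; Python rather than anything fancier, just for brevity):

```python
def fres(sentences, words, syllables):
    """Flesch reading-ease score (standard English-language formula)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Sentence / word / syllable counts from the three opening sequences above.
excerpts = {
    "AXM":   (3, 68, 100),
    "UXM":   (6, 82, 148),
    "CW:XM": (7, 79, 114),
}

for title, counts in excerpts.items():
    print(f"{title}: {fres(*counts):.1f}")
# AXM: 59.4, UXM: 40.3, CW:XM: 73.3
```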

Who would have thought that! It looks like UXM (or at least the selected portion) is harder to read than AXM – a FRES of 40.3 is already ‘College’ level according to Flesch’s table.

But how do these numbers help us if we’re interested in stylometric similarity? All three texts are written by different writers. So far we could only say (again, based on an insufficiently sized sample) that Hine’s writing style is closer to Lobdell’s than to Millar’s. The ultimate test for a stylometric indicator would be to take an additional example text written by one of the three authors, and see if its FRES is close to that of the same author’s X-Men text.

Our fourth example is the rather randomly selected Nemesis by Millar (2010, art by Steve McNiven), from which we’ll again take all the text from the first few panels.

Part of the opening scene from Nemesis.

These are the numbers for the selected text fragment from Nemesis:

  • 8 sentences, 68 words, 88 syllables.
  • This translates to a FRES of 88.7!

In other words, Nemesis and UXM, the two comics written by Millar, appear to be the most dissimilar of the four! However, that was to be expected. Millar would be a poor writer if he always applied the same style to each character in each scene. And the two selected scenes are very different: a TV news report in UXM in contrast to a dialogue (or perhaps more like the typical villain’s monologue) in Nemesis.

Interestingly, there is a TV news report scene in Nemesis too (Part 3, p. 3). Wouldn’t that make for a more suitable comparison?

Here are the numbers for this TV scene which I’ll call N2:

  • 4 sentences, 81 words, 146 syllables.
  • FRES: 33.8

Now this looks more like Millar’s writing from UXM: the difference between the two scores is so small (6.5) that they can be said to be almost identical.

Still, we haven’t really proven anything yet. One possible interpretation of the scores is that the ~30-40 range is simply the usual range for this type of text, i.e. TV news reports. So perhaps these scores are not specific to Millar (or even to comics). One would have to look at similar scenes by Lobdell, Hine and/or other writers to verify that, and ideally also at real-world news transcripts.

On the other hand, one thing has worked well: two texts that we had intuitively identified as similar – UXM and N2 – indeed showed similar Flesch scores. That means FRES is not only a measurement of readability but also of stylometric similarity – albeit a rather crude one which is, as always, best used in combination with other metrics.


Trying to understand Latent Dirichlet Allocation

Latent Dirichlet Allocation (LDA) is one of the most popular algorithms for Topic Modeling, i.e. having a computer find out what a text is about. LDA is also perhaps easier to understand than the other popular Topic Modeling approach, (P)LSA. But even though there are two well-written blog posts that explain LDA to non-mathematicians (Edwin Chen’s and Ted Underwood’s), it still took me quite some time to grasp LDA well enough to be able to code it in a Perl script (which I have made available on GitHub, in case anyone is interested). Of course, you can always simply use software like Mallet that runs LDA over your documents and outputs the results, but if you want to know what LDA actually does, I suggest you read Edwin Chen’s and Ted Underwood’s blog posts first, and then, if you still feel you don’t really get LDA, come back here. OK?

Welcome back. Disclaimer: I’m not a mathematician and there’s still the possibility that I got it all wrong. That being said, let’s take a look at Edwin Chen’s first example again, and this time we’re going to calculate it through step by step:

  • I like to eat broccoli and bananas.
  • I ate a banana and spinach smoothie for breakfast.
  • Chinchillas and kittens are cute.
  • My sister adopted a kitten yesterday.
  • Look at this cute hamster munching on a piece of broccoli.

We immediately see that these sentences are about either eating or pets or both, but even if we didn’t know about these two topics, we would still have to make an assumption about the number of topics within our corpus of documents. Furthermore, we have to make an assumption about how these topics are distributed over the corpus. (In real-life LDA analyses, you’d run the algorithm multiple times with different parameters and then see which fit best.) For simplicity’s sake, let’s assume there are 2 topics, which we’ll call A and B, and that they’re distributed evenly: half of the words in the corpus belong to topic A and the other half to topic B.

Apparently, hamsters do indeed eat broccoli. Photograph CC-BY https://www.flickr.com/photos/carolyncoles/

What exactly is a word, though? I found the use of this term confusing in both Chen’s and Underwood’s texts, so I’ll speak of tokens and lemmata instead: the lemma ‘cute’ appears as 2 tokens in the corpus above. Before we apply the actual LDA algorithm, it makes sense not only to tokenise but also to lemmatise our 5 example documents (i.e. sentences), and to remove stop words such as pronouns and prepositions, which may result in something like this (a small preprocessing sketch follows the list):

  • like eat broccoli banana
  • eat banana spinach smoothie breakfast
  • chinchilla kitten cute
  • sister adopt kitten yesterday
  • look cute hamster munch piece broccoli
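One way to do this preprocessing, sketched here in Python with tiny hand-written stop word and lemma tables that cover just these five sentences (a real pipeline would use a proper lemmatiser and stop word list, e.g. from NLTK or spaCy):

```python
import re

# Minimal tables, just enough for the five example sentences.
STOP = {"i", "to", "and", "a", "for", "are", "my", "at", "this", "on", "of"}
LEMMA = {"ate": "eat", "bananas": "banana", "chinchillas": "chinchilla",
         "kittens": "kitten", "adopted": "adopt", "munching": "munch"}

def preprocess(sentence):
    # Lowercase, split into alphabetic tokens, drop stop words, lemmatise.
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return [LEMMA.get(t, t) for t in tokens if t not in STOP]

print(preprocess("I like to eat broccoli and bananas."))
# ['like', 'eat', 'broccoli', 'banana']
```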

Now we randomly assign topics to tokens according to our assumptions (2 topics, 50:50 distribution). This may result in e.g. ‘cute’ getting assigned once to topic A and once to topic B. An initial random topic assignment may look like this:

  • like -> A, eat -> B, broccoli -> A, banana -> B
  • eat -> A, banana -> B, spinach -> A, smoothie -> B, breakfast -> A
  • chinchilla -> B, kitten -> A, cute -> B
  • sister -> A, adopt -> B, kitten -> A, yesterday -> B
  • look -> A, cute -> B, hamster -> A, munch -> B, piece -> A, broccoli -> B

Clearly, this isn’t a satisfying result yet; words like ‘eat’ and ‘broccoli’ are assigned to multiple topics when they should belong to only one, etc. Ideally, all words connected to the topic of eating should be assigned to one topic and all words related to pets should belong to the other. Now the LDA algorithm goes through the documents to improve this initial topic assignment: for each token, it computes the probability that the token should belong to each topic, based on three criteria:

  1. Which topics are the other tokens in this document assigned to? Probably the document is about one single topic, so if all or most other tokens belong to topic A, then the token in question should most likely also get assigned to topic A.
  2. Which topics are the other tokens in *all* documents assigned to? Remember that we assume a 50:50 distribution of topics, so if the majority of tokens are assigned to topic A, the token in question should get assigned to topic B to restore the equilibrium.
  3. If there are multiple tokens of the same lemma: which topic is the majority of tokens of that lemma assigned to? If most instances of ‘eat’ belong to topic A, then the token in question probably also belongs to topic A.

The actual formulas to calculate the probabilities given by Chen and Underwood seem to differ a bit from each other, but instead of bothering you with a formula, I’ll simply describe how it works in the example (my understanding being closer to Chen’s formula, I think). Let’s start with the first token of the first document (although the order doesn’t matter), ‘like’, currently assigned to topic A.

Should ‘like’ belong to topic B instead? If ‘like’ belonged to topic B, 3 out of 4 tokens in this document would belong to the same topic, as opposed to 2:2 if we stay with topic A. On the other hand, changing ‘like’ to topic B would threaten the equilibrium of topics over all documents: topic B would consist of 12 tokens and topic A of only 10, as opposed to the perfect 11:11 equilibrium if ‘like’ remains in topic A. In this case, the former consideration outweighs the latter, as the two factors get multiplied: the probability for ‘change this token to topic B’ is 3/4 * 1/12 = 6%, whereas the probability for ‘stay with topic A’ is 2/4 * 1/11 = 4.5%. We can also convert these numbers to absolute percentages (so that they add up to 100%) and say: ‘like’ is 57% topic B and 43% topic A.

What are you supposed to do with these percentages? We’ll get there in a minute. Let’s first calculate them for the next token, ‘eat’, because it’s one of those interesting lemmata with multiple tokens in our corpus. Currently, ‘eat’ in the first document is assigned to topic B, but in the second document it’s assigned to topic A. The probability for ‘eat stays in topic B’ is the same as for ‘like stays in topic A’ above: within this document, the ratio of ‘B’ tokens to ‘A’ tokens is 2:2, which gives us 2/4 or 0.5 for the first factor; ‘eat’ would be 1 out of 11 tokens that make up topic B across all documents, giving us 1/11 for the second factor. The probability for ‘change eat to topic A’ is much higher, though, because there is already another ‘eat’ token assigned to this topic in another document. The first factor is 3/4 again, but the second is 2/12, because out of the 12 tokens that would make up topic A if we changed this token’s assignment, 2 would be of the same lemma, ‘eat’. In percentages, this means: this first ‘eat’ token is 74% topic A and only 26% topic B.
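For those who want to double-check my arithmetic, here are the same calculations in a few lines of Python (the exact normalised ratios come out at roughly 58/42 and 73/27; the 57%/43% and 74%/26% above come from the already-rounded intermediate values):

```python
# 'like' in document 1, currently topic A (11 tokens in A, 11 in B overall)
p_like_to_B = 3/4 * 1/12    # change to B: 0.0625, i.e. ~6%
p_like_stay_A = 2/4 * 1/11  # stay in A:   0.0455, i.e. ~4.5%
print(p_like_to_B / (p_like_to_B + p_like_stay_A))  # ~0.58 -> leans topic B

# 'eat' in document 1, currently topic B (another 'eat' token sits in A)
p_eat_stay_B = 2/4 * 1/11   # stay in B
p_eat_to_A = 3/4 * 2/12     # change to A: boosted by the second 'eat' token
print(p_eat_to_A / (p_eat_to_A + p_eat_stay_B))      # ~0.73 -> leans topic A
```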

In this way we can calculate probabilities for each token in the corpus. Then we randomly assign new topics to each token, only this time not on a 50:50 basis, but according to the percentages we’ve figured out before. So this time, it’s more likely that ‘like’ will end up in topic B, but there’s still a 43% chance it will get assigned to topic A again. The new distribution of topics might be slightly better than the first one, but depending on how lucky you were with the random assignment in the beginning, it’s still unlikely that all tokens pertaining to food are neatly put in one topic and the animal tokens in the other.

The solution is to iterate: repeat the process of probability calculations with the new topic assignments, then randomly assign new topics based on the latest probabilities, and so on. After a couple of thousand iterations, the probabilities should make more sense. Ideally, there should now be some tokens with high percentages for each topic, so that both topics are clearly defined.
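To make the procedure concrete, here is a minimal Python sketch of the whole loop as I understand it (my own script is in Perl; this re-implements the description above without the Dirichlet smoothing priors that a full LDA sampler such as Mallet’s would use, and it resamples all tokens at once per iteration instead of one at a time as in textbook collapsed Gibbs sampling):

```python
import random
from collections import Counter

# The five preprocessed documents from above.
docs = [
    ["like", "eat", "broccoli", "banana"],
    ["eat", "banana", "spinach", "smoothie", "breakfast"],
    ["chinchilla", "kitten", "cute"],
    ["sister", "adopt", "kitten", "yesterday"],
    ["look", "cute", "hamster", "munch", "piece", "broccoli"],
]
K = 2  # assumed number of topics

# Random initial assignment: one topic per token.
z = [[random.randrange(K) for _ in doc] for doc in docs]

def resample(docs, z, iterations=10000):
    for _ in range(iterations):
        # Global counts: tokens per topic, and per (lemma, topic) pair.
        topic_n = Counter(t for row in z for t in row)
        lemma_n = Counter((w, t) for doc, row in zip(docs, z)
                          for w, t in zip(doc, row))
        new_z = []
        for doc, row in zip(docs, z):
            doc_n = Counter(row)
            new_row = []
            for w, t_old in zip(doc, row):
                # Weight for each topic k, following the criteria above:
                # (share of this document's tokens in k, counting this token
                # as k) times (share of topic k's tokens that are this lemma,
                # again counting this token as k).
                weights = []
                for k in range(K):
                    delta = 0 if k == t_old else 1  # pretend the token moves
                    in_doc = (doc_n[k] + delta) / len(doc)
                    of_lemma = (lemma_n[(w, k)] + delta) / (topic_n[k] + delta)
                    weights.append(in_doc * of_lemma)
                new_row.append(random.choices(range(K), weights=weights)[0])
            new_z.append(new_row)
        z = new_z
    return z

z = resample(docs, z)
for k in range(K):
    members = sorted({w for doc, row in zip(docs, z)
                      for w, t in zip(doc, row) if t == k})
    print(f"topic {'AB'[k]}:", ", ".join(members))
```

One way to turn the final assignments into per-lemma percentages like those reported below is to tally each lemma’s topic assignments over the last several hundred iterations instead of reading off a single sample.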

Only with this example, it doesn’t work out. After 10,000 iterations, the LDA script I’ve written produces results like this:

  • topic A: cute (88%), like (79%), chinchilla (77%), hamster (76%), …
  • topic B: kitten (89%), sister (79%), adopt (79%), yesterday (79%), …

As you can see, words from the ‘animals’ category ended up in both topics, so this result is worthless. The result given by Mallet after 10,000 iterations is slightly better:

  • topic 0: cute kitten broccoli munch hamster look yesterday sister chinchilla spinach
  • topic 1: banana eat piece adopt breakfast smoothie like

Topic 0 is clearly the ‘animal’ topic here. Words like ‘broccoli’ and ‘munch’ slipped in because they occur in the mixed-topic sentence, “Look at this cute hamster munching on a piece of broccoli”. No idea why ‘spinach’ is in there too, though. It’s equally puzzling that ‘adopt’ somehow crept into topic 1, which otherwise can be identified as the ‘food’ topic.

The reason for this ostensible failure of the LDA algorithm is probably the small size of the test data set. The results become more convincing the greater the number of tokens per document.

Detail from p. 1 of Astonishing X-Men (1995) #1 by Scott Lobdell and Joe Madureira. The text in the caption boxes (with stop words liberally removed) can be tokenised and lemmatised as: begin break man heart sear soul erik lehnsherr know world magneto founder astonishing x-men last bastion hope world split asunder ravage eugenics war human mutant know exact ask homo superior comrade day ask die

For a real-world example with more tokens, I have selected some X-Men comics. The idea is that because they are about similar subject matters, we can expect some words to be used in multiple texts from which topics can be inferred. This new test corpus consists of the first 100 tokens (after stop word removal) from each of the following comic books that I more or less randomly pulled from my longbox/shelf: Astonishing X-Men #1 (1995) by Scott Lobdell, Ultimate X-Men #1 (2001) by Mark Millar, and Civil War: X-Men #1 (2006) by David Hine. All three comics open with captions or dialogue with relatively general remarks about the ‘mutant question’ (i.e. government action / legislation against mutants, human rights of mutants) and human-mutant relations, so that otherwise uncommon lemmata such as ‘mutant’, ‘human’ or ‘sentinel’ occur in all three of them. To increase the number of documents, I have split each 100-token batch into two parts at semantically meaningful points, e.g. when the text changes from captions to dialogue in AXM, or after the voice from the television is finished in CW:XM.

Page 6, panel 1 from Ultimate X-Men #1 by Mark Millar and Adam Kubert. Tokens: good evening boaz eshelmen watch channel nine new update tonight top story trial run sentinel hail triumphant success mutant nest los angeles uncover neutralize civilian casualty

I then ran my LDA script (as described above) over these 6 documents totalling ~300 tokens, again with the assumption that there are 2 equally distributed topics (because I had carelessly hard-coded this number of topics in the script and now I’m too lazy to re-write it). This is the result after 1,000 iterations:

  • topic A: x-men (95%), sentinel (93%), sentinel (91%), story (91%), different (90%), …
  • topic B: day (89%), kitty (86%), die (86%), …

So topic A looks like the ‘mutant question’ issue with tokens like ‘x-men’ and two times ‘sentinel’, even though ‘mutant’ itself isn’t among the high-scoring tokens. Topic B, on the other hand, makes less sense (Kitty Pryde only appears in CW:XM, so that ‘kitty’ occurs in merely 2 of the 6 documents), and its highest percentages are also much lower than those in topic A. Maybe this means that there’s only one actual topic in this corpus.

Page 1, panel 5 from Civil War: X-Men #1 by David Hine and Yanick Paquette. Tokens: incessant rain hear thing preternatural acute hearing cat flea

Running Mallet over this corpus (2 topics, 10,000 iterations) yields an even less useful result. The first 5 words in each topic are:

  • topic 0: mutant, know, x-men, ask, cooper
  • topic 1: say, sentinel, morph, try, ready

(Valerie Cooper and Morph are characters that appear in only one comic, CW:XM and AXM, respectively.)

Topic 0 at least associates ‘x-men’ with ‘mutant’, but then again, ‘sentinel’ is assigned to the other topic. Thus neither topic can be related to an intuitively perceived theme in the comics. It’s clear how these topics were generated though: there’s only 1 document in which ‘sentinel’ doesn’t occur, the first half of the CW:XM excerpt, in which Valerie Cooper is interviewed on television. But ‘x-men’ and ‘mutant’ do occur in this document, the latter even twice, and also ‘know’ occurs more frequently (3 times) here than in other documents.

So the results from Mallet and maybe even my own Perl script seem to be correct, in the sense that the LDA algorithm has been properly performed and one can see from the results how the algorithm got there. But what’s the point of having ‘topics’ that can’t be matched to what we intuitively perceive as themes in a text?

The problem with our two example corpora here was that they were still not large enough for LDA to yield meaningful results. As with all statistical methods, LDA works better the larger the corpus; in fact, the idea of such methods is that they are best applied to amounts of text that are too large for a human to read. Therefore, LDA might not be that useful for disciplines (such as comics studies) in which it’s difficult to gather large text corpora in digital form. But do feel free to randomly download texts from Wikisource, for example, and you’ll find that within them, LDA is able to successfully detect clusters of words that occur in semantically similar documents.


My ideal (and somewhat random) X-Men team

On his weblog Kevin Reviews Uncanny X-Men, Kevin O’Leary had an interesting post last month in which he picked the six members of his “ideal X-Men team”. I liked the idea and thought I’d post my own version, albeit with a twist: instead of choosing from all X-Men comics ever published, or even just those I’ve read, I browsed through whatever comics I currently had at hand on my shelf and in my longbox, and from these I selected the characters that I found interesting for some reason or other. Here they are, in order of publication:

Morph

  • Morph from Scott Lobdell’s and Joe Madureira’s Astonishing X-Men v1, 1995 (“Age of Apocalypse” storyline): no idea why I own a copy of this comic book, which is mediocre at best. But Lobdell and Madureira employ Morph’s shapeshifting abilities for comedic purposes, which makes him the most memorable character here.

Bishop

  • Bishop from David Hine’s and Yanick Paquette’s Civil War: X-Men, 2007: while I find Bishop’s mutant power (“energy absorption and redirection” – Wikipedia) rather boring and himself as a character not very likeable, his backstory – coming from a dystopian future – makes for interesting storytelling material. In Civil War: X-Men, Bishop feels compelled to side with the government and turn against Cyclops and the other X-Men.

detail from Wolverine #306

  • Wolverine from Cullen Bunn’s and Paul Pelletier’s run on Wolverine v4, 2012: while Wolverine certainly isn’t an underexposed character, Bunn and Pelletier showed that his backstory still has some new plot devices in it. Plus, his regenerating abilities can be stunningly visualised, e.g. when half his face is blown off by a shotgun, and he regrows his eye during the same fight scene (in #306).

Warbird

  • Warbird from Marjorie Liu’s and Gabriel Hernandez Walta’s run on Astonishing X-Men, 2013: Warbird is a member of the Shi’ar alien race and not a mutated human, but her ‘otherness’ (which Liu frequently emphasised) matches that of the other X-Men misfits nicely.

detail from X-Treme X-Men #12

  • Nazi Xavier from Greg Pak’s and Andre Araujo’s X-Treme X-Men v2, 2013: it’s Charles Xavier, the popular telepath. Only he’s a Nazi. X-Treme X-Men introduced many alternate versions of well-known characters from parallel worlds, each weirder than the last. Technically, Nazi Xavier is a villain, not an X-Man, but Marvel never had much of a problem with changing a villain into a hero and vice versa. Such a ‘deal with the devil’ would create those tensions that seem to be all-important in any superhero team.

Magneto

  • Magneto from Cullen Bunn’s and Gabriel Hernandez Walta’s Magneto, 2014: Magneto has already undergone the villain-to-X-Man treatment (and back again, probably several times), so it shouldn’t be a problem to have him on the team too. It would be interesting to have Holocaust survivor Magneto (don’t ask me how old he is supposed to be) on the same team as Nazi Xavier, but the reason I want Magneto on my ideal X-Men team is that it’s just so much fun to see him twisting and twirling pieces of metal around.

X-Men: Days of Continuity are Past

Who’s that girl?

X-Men: Days of Future Past is still being shown in German cinemas, and by now, probably more than a million people have seen it here. While I found it enjoyable enough, I’m still wondering who these Marvel films are made for. Or, to put it differently: are film makers still concerned about continuity at all, or is it considered nitpicking and party-pooping to point out continuity errors in this postmodern day and age?

Basically, I can think of four ways in which films deal with continuity:

a) the film is a stand-alone story and doesn’t need to adhere to any extra-textual continuity;

b) the film is part of a series of films and conforms to the continuity established by the earlier films;

c) the setting of the film (“world”/”universe”) is adapted from another medium and is consistent with the continuity established there;

d) the entire story of the film is adapted from another medium, and continuity is not an issue as long as the adaptation is faithful.

The problem with films like X-Men: Days of Future Past is that their category would be “e) all of the above”. There’s the continuity of the previous X-Men films and the continuity of countless X-Men comics, and X-Men: DoFP makes references to both and can’t be fully comprehended without ample knowledge of both. However, the two continuities are not quite compatible with each other, and each of them has its own issues, so it comes as no surprise that X-Men: DoFP isn’t free of continuity errors either. A month ago, Rob Bricken published this helpful overview on io9: http://io9.com/8-ways-x-men-movie-continuity-is-still-irretrievably-f-1581678509

Not mentioned there is the conundrum of Pietro/Peter Maximoff and his sister(s), which is explained in Empire magazine (see e.g. here).

All this makes me wonder: if everything we see in a film is potentially subject to later revisions, and ultimately nothing is authoritative, why do filmgoers still care about these stories at all? Many comic book readers, tired of convoluted continuities and endless retconning, turned their backs on this kind of storytelling years ago. How long will it take cinema audiences to realise that all these superhero “cinematic universes” make little sense?


Sexy-lamp-testing Rick Remender

Continuing from the previous post, let’s turn to a gender bias test that some people believe to be superior to the Bechdel Test. In an interview last year, writer Kelly Sue DeConnick (Captain Marvel) said,

Nevermind the Bechdel test, try this: if you can replace your female character with a sexy lamp and the story still basically works, maybe you need another draft. They have to be protagonists, not devices.

This seems even more difficult to put into practice than the Bechdel Test. Is there a methodologically sound way to determine whether a story “works”? Anyway, I’m going to try this test on two recent comic books written by Rick Remender. Mind you, that selection doesn’t mean I think Rick Remender is a sexist writer or anything. It’s just that he’s writing a lot of comic books at the moment, and by pure coincidence I happened to have read two of them, Black Science #1 and Uncanny Avengers #14. And who knows, maybe this comparison will reveal something about different attitudes towards gender issues at Image and Marvel, respectively.

The science fiction story Black Science (art by Matteo Scalera and Dean White, published by Image) starts with dimension-travelling scientist Grant McKay running away from fish monsters. He is accompanied by a sexy lamp in a space suit, and his internal monologue is addressed to another sexy lamp. Weird, but not that important for the story. His flight leads him to the den of some frog men, who have captured and enslaved a sexy lamp. McKay frees that lamp and returns her to the fish men, whereupon they become less hostile. There are some more sexy lamps towards the end of the issue, but they are not that significant.

Black Science #1: frog men ogling their stolen sexy lamp. Makes sense to me.

Overall, the story works almost as well with sexy lamps instead of female characters. The “damsel in distress” motif at work here is almost as objectifying as literally turning a woman into a lamp.

Uncanny Avengers #14 (pencils by Steve McNiven, inks by John Dell, colours by Laura Martin, published by Marvel) is part of a somewhat convoluted story. The gist is that one sexy lamp with magical powers wants to perform a ritual to defeat the two major supervillains of this story (one of which is a sexy lamp), while two other superheroes (again, one of them a sexy lamp) try to stop her because they think the ritual will help the villains. Of course, this conflict is resolved by means of a lot of fighty-fighty, in the course of which one sexy lamp kills another, only to be killed in turn by one of the supervillains.

Uncanny Avengers #14: clash of the sexy lamps. Makes no sense at all.

Clearly, this fighting and killing makes much more sense when done by the Scarlet Witch and Rogue, rather than by some sexy lamps. Therefore, Uncanny Avengers #14 passes the Sexy Lamp Test, whereas Black Science #1 fails.

Does that mean Uncanny Avengers is less gender-biased than Black Science? Not necessarily. The problem with the Sexy Lamp Test is that it “rewards” comics with female characters who say and do a lot, but it doesn’t judge what they say and do. Despite their importance to the story, the female characters in Uncanny Avengers are “lazily” written – all women in this comic book could just as well be men (and vice versa) and nothing would change (except for Wonder Man and Scarlet Witch becoming a gay couple). These female superheroes are just male superheroes with breasts. On the other hand, the femaleness of the enslaved fish woman in Black Science reveals the society of the frog men as patriarchal, and thus at least serves a purpose within the story.

Therefore, I don’t think the Sexy Lamp Test is better at detecting gender bias than the Bechdel Test. They just point out different aspects of gender bias (in speech vs. in narrative function), so maybe they are best used in combination.


Reading my first crossover: X-Termination

Three times Nightcrawler by the three best X-Termination artists: Age of Apocalypse Nightcrawler by Gabriel Hernandez Walta, Kurt Waggoner by André Araújo, and Age of Apocalypse Nightcrawler by Matteo Buffagni.

It’s not that I’ve never read a crossover story before, but when I did, it was always after it had been collected into trade paperbacks. This allowed me to make a conscious decision to buy the TPBs. However, it’s quite a different thing when a comic book series you’ve subscribed to becomes part of a crossover. Do you really want to purchase additional comic books, from series you don’t care about, by creators you’re not interested in, just to be able to grasp the story in “your” series? In the past, my answer was no – for instance, I dropped Swamp Thing when the “Rotworld” crossover started.

This time, though, I decided to play along. I had been reading Astonishing X-Men (AXM) for some time (see my review of #48-51 and my previous blog post on #57) when the crossover event X-Termination was announced, spanning the books AXM, Age of Apocalypse, X-Treme X-Men and an eponymous mini-series. Here’s what I think of each issue.

Although not listed as part of X-Termination, the story actually starts in AXM #59.

Language: English
Authors: Marjorie Liu (writer), Gabriel Hernandez Walta (artist), Cris Peter (colourist)
Publisher: Marvel
Released: 2013-02-27
Pages: 19 (yes, that’s not a lot of pages for $3.99…)
Price: $3.99
Website: http://marvel.com/comics/series/744/astonishing_x-men_2004_-_2010 (yes, that’s the correct link to the current series…)

Previously in AXM: after the gay marriage storyline, the book focused on the character Karma and two other, virtually indistinguishable Asian women. I must say I had grown tired of Mike Perkins’s art, but Gabriel Hernandez Walta came to the rescue. Issue #58 was a filler one-shot, but in #59 we’re heading straight towards X-Termination. The X-Men are hunting an alternate universe version of Nightcrawler, who has apparently committed a murder off-panel. Not much happens in this issue, but the nice art makes it a worthwhile, atmospheric read.

The first official “prologue to X-Termination” is Age of Apocalypse #13.

Authors: David Lapham (writer), Renato Arlem & Valentine de Landro (artists), Lee Loughridge (colourist)
Released: 2013-03-06
Pages: 20
Price: $2.99
Website: http://marvel.com/comics/series/17278/age_of_apocalypse_2012_-_present (for some reason they split the series into two websites, “2011 – present” (#1-12) and “2012 – present” (#13-14))

Most of the story here takes place in an alternate reality – the “Age of Apocalypse” – and is as yet unconnected to the events in AXM. The aim of this issue, it seems, is to recap the previous events in this series, and maybe even to introduce new readers to this post-apocalyptic setting with all its alternate versions of the X-Men. But I don’t find all these little episodes very enlightening. Then again, most of what happens here is of no importance to the crossover story anyway. It would just have been nice to get to know all the obscure characters that do play a role in X-Termination later. What really repels me, though, is the art: I can only guess that Renato Arlem and Lee Loughridge (I’m not sure what Valentine de Landro’s contribution to this book was) wanted to make the artwork suit the dark and grim atmosphere of the setting, but the result looks murky at best.

The second prologue, according to an advertisement flyer, is X-Treme X-Men #12, even though it doesn’t say so anywhere in the issue.

Authors: Greg Pak (writer), Andre Araujo (artist), Jessica Kholinne & Gloria Caeli (colourists)
Released: 2013-03-13
Pages: 20
Price: $2.99
Website: http://marvel.com/comics/series/16308/x-treme_x-men_2012_-_present

In contrast to Age of Apocalypse, X-Treme X-Men is a beauty to behold. André Araújo’s style of drawing is more cartoonish, almost manga-esque, yet in combination with the unobtrusive colouring it is reminiscent of European comics. Greg Pak tells the story of yet another alternate reality X-Men team, who witness the opening of a transdimensional rift and the arrival of the three supervillains of X-Termination. But he tells that story with lots of humour. Suffice it to say that there are three evil versions of Professor Xavier: “Nazi Xavier”, “Witch King Xavier”, and “the Floating Head”. It’s a pity that X-Treme X-Men was cancelled after X-Termination, as this issue makes me want to read more of this series.

The first official part of X-Termination is X-Termination #1 (of 2).

Authors: David Lapham (writer), Lapham/Liu/Pak (story), David Lopez (penciller), Alvaro Lopez & Allen Martinez (inkers), Andres Mossa (colourist)
Released: 2013-03-20
Pages: 20
Price: $3.99
Website: http://marvel.com/comics/series/17743/x-termination_2013_-_present (again, the Marvel website lists several links…)

Meanwhile, another portal is opened from the “real” earth to the Age of Apocalypse, where the three X-Men teams meet, plus a fourth party, the aforementioned villainous trio. The art is the weak point of this book again; I don’t find the way Lopez handles anatomy and facial expressions very convincing.

For the next installment of X-Termination, we return to AXM (#60).

Authors: Marjorie Liu (writer), Lapham/Liu/Pak (story), Matteo Buffagni & Renato Arlem (artists), Christopher Sotomayor & Lee Loughridge (colourists)
Released: 2013-03-27
Pages: 20
Price: $3.99
Website: http://marvel.com/comics/series/744/astonishing_x-men_2004_-_2010

What a disappointment: while this issue is written by regular AXM writer Marjorie Liu, the art is not by Gabriel Hernandez Walta. Instead, the first half is drawn by Matteo Buffagni and coloured by Christopher Sotomayor, and the second half is drawn by Renato Arlem and coloured by Lee Loughridge. Buffagni and Sotomayor seem to go for a 90s vibe, with unnervingly bright colours. Arlem’s and Loughridge’s art is just as off-putting as in Age of Apocalypse #13. Story-wise, it’s mainly fighty-fighty here.

X-Termination continues in Age of Apocalypse #14.

Authors: David Lapham (writer), Lapham/Liu/Pak (story), Andre Araujo & Renato Arlem (artists), Cris Peter & Lee Loughridge (colourists)
Released: 2013-04-03
Pages: 20
Price: $2.99
Website: http://marvel.com/comics/series/17278/age_of_apocalypse_2012_-_present

Again there are two art teams in this comic book, but this time there is a system to the shifts: there’s beautiful art by André Araújo and Cris Peter in the “real world” scenes, and ugly art by Renato Arlem and Lee Loughridge in the “Age of Apocalypse” scenes. The fighting against the alien villains continues.

X-Termination part four is told in X-Treme X-Men #13.

Authors: Greg Pak (writer), Lapham/Liu/Pak (story), Guillermo Mogorron & Raul Valdes (artists), Ed Tadeo, Carlos Cuevas, Don Ho and Walden Wong (inkers), Lee Loughridge (colourist)
Released: 2013-04-10
Pages: 20
Price: $2.99
Website: http://marvel.com/comics/series/16308/x-treme_x-men_2012_-_present

More artists are introduced, while the story continues to leave me cold (despite referencing the Dark Phoenix saga). Mogorron’s and Valdes’s respective art styles are simplified and cartoonish, which isn’t necessarily a bad thing, but here it just looks sloppy.

The penultimate X-Termination installment is AXM #61.

Authors: Marjorie Liu (writer), Lapham/Liu/Pak (story), Renato Arlem, Klebs deMoura, Matteo Buffagni, Raul Valdes, and Carlos Cuevas (artists), Lee Loughridge & Christopher Sotomayor with Andres Mossa (colourists)
Released: 2013-04-17
Pages: 20
Price: $3.99
Website: http://marvel.com/comics/series/744/astonishing_x-men_2004_-_2010

Visually, it gets even more confusing with not two but three art teams in one issue, none of which I’m particularly fond of. Which is a shame, because the story finally seems to be going somewhere, with the alternate universe version of Jean Grey in danger of being corrupted by the power of the “Apocalypse Seed”.

The crossover story concludes in X-Termination #2.

Authors: David Lapham (writer), Lapham/Liu/Pak (story), David Lopez, Guillermo Mogorron, Raul Valdes, and Matteo Lolli (pencillers), Don Ho, Lorenzo Ruggiero, Carlos Cuevas, and Allen Martinez (inkers), Andres Mossa (colourist)
Released: 2013-04-24
Pages: 20
Price: $3.99
Website: http://marvel.com/comics/series/17744/x-termination_2013_-_present

Again there are just too many artists, some of whom have produced here what might be among the worst art I’ve ever seen in a Marvel comic. The conclusion of the story doesn’t feel very epic, even though the three-page epilogue adds a nice touch.

Overall, the X-Termination crossover feels like a waste of $27.92 and an unwelcome interruption of AXM, which in fact continues with #62 to be a strong series, well written and well drawn (by Hernandez Walta again). The only positive outcome for me was discovering André Araújo‘s art, of which I hope to see more in the future. Still, my personal reservations about crossover events have been confirmed, and I can’t help wondering why such marketing tricks, more often than not, succeed in boosting the sales of all tie-in issues. Then again, the commercial success of X-Termination seems to have been moderate – after all, this isn’t exactly Marvel’s big summer event.

Rating: ● ● ○ ○ ○ (only due to AXM #59 and X-Treme X-Men #12 raising the average)


Astonishing X-Men is a gay soap opera (and that’s a good thing)

Review of Astonishing X-Men #48-51

Language: English
Authors: Marjorie Liu (writer), Mike Perkins (artist)
Publisher: Marvel
Pages: 20-26
Price: $3.99
Website: http://marvel.com/comic_books/series/14275/astonishing_x-men_2004_-_present

Recently, David Watkins said on HLN: “Comics and soaps have a lot in common — wild situations, love triangles, forbidden love, revenge and intense drama abound in both.”

I wouldn’t go quite that far. While such mostly romantic motifs can be found in many American mainstream superhero comics (Watkins mentions the X-Men and the Fantastic Four), they are dominated by other themes, such as the supernatural or physical fights between good and evil. Romance isn’t exclusive to soap operas, but their emphasis on romance is a defining characteristic. Astonishing X-Men, however, relies heavily on romance and thus gravitates towards the soap opera genre, as we will soon find out.

Previously in Astonishing X-Men: I didn’t read this series before Liu and Perkins started their run in #48, so I have no idea what was going on before. This version of the X-Men consists of the well-known characters Wolverine, Gambit and Iceman, and some not-so-well-known ones. At the center of this story is Northstar – if you don’t know who he is, I recommend this blog post at Major Spoilers.

Issue #48 is already surprising: four entire pages are devoted to Northstar and his non-superpowered boyfriend Kyle, who basically “only” talk about their new situation of living together in New York after years of long distance dating. Then we get to read three pages of Gambit and fellow mutant Cecilia Reyes, talking in his apartment. That makes a total of seven pages of pure soap opera. The remaining 13 pages feature rather generic action: the X-Men being attacked by a group of supervillains.

In issue #49 there’s another four-page dialogue between Northstar and Kyle, taking place after the aforementioned fight, with lines such as “I love you. I’ve never loved anyone as much as I love you” (Northstar).

Issue #50 contains the marriage proposal that got so much media attention. Consequently, the number of pages devoted to Northstar and Kyle increases to a whopping eight out of 20. Still, this issue also features the artistically best action scenes so far. The technique of Perkins and colourist Andy Troy of overlaying delicate outlines with opaque highlighting effects gives a certain radiance to the drawings, which looks particularly good whenever Iceman is involved.

Finally, issue #51, the wedding issue. (The idea of the gay wedding, by the way, turned out not to have been Liu’s, but an editorial decision from long ago.) The action part of the story is reduced to six pages, the remaining 20 pages of this oversized issue being taken up mainly by the wedding preparations and ceremony. The fact that this is a gay wedding is hardly reflected at all. In two panels, two wedding guests express their mild discomfort (“it’s a lot to take in”, “I can’t stop thinking about what my grandma would say about all of this”). Then there’s the scene where Warbird refuses to attend the wedding, which I had thought was due to her not recognising the validity of human weddings in general. But that’s probably just my lack of knowledge of the Shi’ar alien race to which Warbird belongs, because several other reviewers interpreted Warbird’s behaviour as decidedly homophobic.

So large portions of this series read like a soap opera centered around a gay couple. Is this what I want to read in a superhero comic? Well, for me, drama, feelings, and relationships between superheroes have always been part of the appeal of the Marvel universe, and in particular of team series like the X-Men books, in which all characters seem to be related to or at least acquainted with each other. At any rate, it’s better than endless fisticuffs. Therefore I’m enjoying Astonishing X-Men.

As for the homosexuality aspect: though some people say that “all superheroes are gay“, Astonishing X-Men strongly focuses on homosexuality. Or does it? While in the real world, in the United States and elsewhere, gay marriage is still a controversial issue, we don’t really get to see that in the comic. In this fantasy world, homophobia is something that only aliens exhibit, and everything is sunshine and roses. Marvel has found a way to make homosexuality palatable to its mainstream audience while at the same time appearing bold and progressive. In spite of that (or precisely because of it), this storyline will probably become an instant classic among scholars at the intersection of LGBT and comics studies.

Rating: ● ● ● ○ ○