Marcel H., anime killer?

This is the second blog post of a series on the occasion of ‘100 Years of Anime’. Read the first post here.

On this day three months ago, the memorial service for Jaden F. was held in Herne, Germany. Jaden had been the first of two victims stabbed to death by Marcel H., whom the media has linked to anime. One German news magazine in particular, Stern (No. 12, March 16), has emphasised the ostensible connections of the murders to anime.

The events were also covered by international media (e.g. Daily Mail, Telegraph, Independent), but none of them even mentioned anime. Therefore, the (thankfully limited and short-lived) ‘moral panic’ regarding anime doesn’t seem to have reached the Anglophone anime blogosphere either, which is why I’ll sum up the story here.

These are the facts: Marcel H. is a 19-year-old NEET who had unsuccessfully applied to join the Army in February. On March 6, he lured his neighbours’ nine-year-old son into his house and killed him with a knife. Then he went to the home of an acquaintance, 22-year-old Christopher W., and killed him early in the morning of March 7. Marcel H. stayed at Christopher W.’s apartment until March 9, when he set it on fire, went to a Greek diner, told the owner to call the police, and let himself be arrested.

So far, these events have nothing to do with anime. But Barbara Opitz and Lisa McMinn, the authors of the Stern article, point out the following details: when Marcel H. was arrested at the diner, he was carrying an umbrella and a bag of onions. These items are mentioned in other news articles too, but only Stern offers an explanation, according to which the umbrella and the onions refer to two cards from the Yu-Gi-Oh! Trading Card Game, “Rain of Mercy” and “Glow-Up Bulb” (“Aufblühende Blumenzwiebel” in German; “Zwiebel” can also mean “onion”), respectively. Furthermore, in one of the pictures Marcel posted online, in which he poses with a knife, a poster of the anime series Yu-Gi-Oh! GX can be seen in the background. (Interestingly, in the Daily Mail article, the image – pictured below on the right-hand side – was altered so that the poster no longer refers to Yu-Gi-Oh!.)

Another connection to Yu-Gi-Oh! is Christopher W., Marcel H.’s second victim, who ran a Yu-Gi-Oh! site on Facebook; apparently they got to know each other through the game and used to play Yu-Gi-Oh! video games together. Finally, Stern points out that there are two characters in the Yu-Gi-Oh! anime with the same first names as Marcel H. and Jaden F.: Yu-Gi-Oh! GX protagonist Jaden Yuki and his antagonist Marcel Bonaparte. Stern implies that Marcel H. identified with the villain and acted out the Yu-Gi-Oh! story by attacking Jaden. The only detail that doesn’t quite fit is that the Stern article also says that Marcel H. had been learning Japanese in order to be able to read manga and watch anime in their original language; in the Japanese original version of Yu-Gi-Oh! GX, however, Jaden is called “Jūdai” and Marcel “Marutan” or “Martin”.

Apart from the Yu-Gi-Oh! connection, there’s not much that links Marcel H. to anime. Some chat messages have surfaced in which Marcel H. talks to another person about the murders at the time when he committed them, and in one message he says, “See you space cowboy”, which indeed is a quote from the anime Cowboy Bebop.

The other details mentioned in the Stern article are vague connections to Japan rather than to anime specifically: at the time of committing the murders, Marcel H. posted a picture of a handwritten note on which he had signed his name in Japanese, and he owned “bamboo swords which he kept under his bed like a treasure. Furthermore a wooden bow and five Japanese ceremonial knives” (all translations mine).

The sad and disturbing thing (apart from the murders themselves, of course) is how Stern chose to focus on Marcel H.’s anime fandom, instead of e.g. his obsession with martial arts, computer games, or 4chan (as other news outlets did, sometimes inaccurately calling it the “darknet”). For instance, the entire Stern article is titled “Viel Spaß in der Anime-Welt” (“Have Fun in the Anime World”), which isn’t even a quote by Marcel H. but by his unnamed chat partner. The way in which the Stern authors desperately try to link the content of anime to the murderer is simply journalistically unethical: “‘Space Cowboy’ refers to a character from the anime series, ‘Cowboy Bebob’ [sic], in which a hero says sentences like this one: ‘I don’t go to die, but to find out if I’m still alive.’ Marcel H. is obsessed with the world of anime, Japanese animated films, often dark dystopias, the protagonists have spiky hair and shiny, big eyes. […] the heroes […] are often outsiders, but with hidden powers. Quirky, awkward and at the same time infallible. Outsiders like Marcel H.”

Luckily, the Stern article has failed to start a witch hunt against anime fans like the ones that e.g. video gamers and heavy metal fans have had to endure in past decades. But the article shows that anime still has a long way to go before it can be said to be part of the mainstream.


Exhibition review: Comics! Mangas! Graphic Novels!, Bonn

Last month, “the most comprehensive exhibition about the genre to be held in Germany” opened at the venerable Bundeskunsthalle in Bonn, where it can be visited until September 10. Curated by Alexander Braun and Andreas Knigge, it is a remarkable exhibition, not only because of its size (300 exhibits) but also because it tries to encompass the whole history of comics without any geographic, chronological or other limits. To this end, it is organised in six sections.

The first section is about early American newspaper strips. The amount of original newspaper pages and original drawings on display here would be even more impressive had there not been another major exhibition on the same topic less than a year ago. Still, it’s always interesting to see e.g. a Terry and the Pirates ink drawing alongside the corresponding printed coloured Sunday page (July 24, 1942). Another highlight in this section is an old Prince Valiant printing plate, or more precisely, a letterpress zinc cliché which would be transferred onto a flexible printing plate for the cylinder of a rotary press, as the label in the display case explains.

Section 2 stays in the US but moves on to comic books. In the first of its two rooms we find mainly superhero comics, again often represented through original drawings e.g. from Watchmen or Elektra: Assassin. The second room of this section is about non-superhero comic books; outstanding exhibits here are the complete ink drawings to two short stories: a 7-page The Spirit story by Will Eisner from July 15, 1951, and a 6-page war story from Two-Fisted Tales by Harvey Kurtzman from 1952.

The next section of the exhibition is dedicated to Franco-Belgian comics. There’s an interesting display case with a side-by-side comparison of the same page of Tintin in various original and translated editions, and there are also original drawings by Hergé, but perhaps even more impressive is an original inked page from Spirou et Fantasio by Tome and Janry, who revitalised the series in the 80s. In the same section, half a room contains examples of old German comics, both from East and West Germany.

And then we get to section 4, the manga section. The biggest treats here are several Osamu Tezuka original drawings from Janguru Taitei, Tetsuwan Atomu and Buddha. There’s original Sailor Moon art by Naoko Takeuchi as well. Most of the other exhibits, however, are from manga that are far less famous, at least outside of Japan. In this section there’s also the only factual error I found in the exhibition: a label on Keiji Nakazawa’s Hadashi no Gen says, “Barefoot Gen is one of the earliest autobiographical comics ever.” While Hadashi no Gen was certainly inspired by Nakazawa’s own experiences, it is a fictional story, not an autobiography – that would be Nakazawa’s earlier, shorter manga, Ore wa Mita.

Section 5 is about underground and alternative comics from both the US and Europe. The highlight here is the famous Cheap Thrills record by Big Brother and the Holding Company, which can be listened to via headphones. Most comics enthusiasts are familiar with the record cover by Robert Crumb, but perhaps not with the music on the album.

The sixth and last section is titled “Graphic Novels”. It is already unfortunate enough to make the dreaded ‘g-word’ part of the exhibition title, but this section makes things worse by not actually problematising the term or even analysing the discourse around it. Instead, “graphic novel” is meant here to comprise a vast range of contemporary comic production, including Jirō Taniguchi’s manga, pamphlet comic books such as Eightball and Love & Rockets, and Raw magazine.

The exhibition as a whole offers a lot of interesting things to see, but maybe its aim to represent the whole comics medium was too ambitious in the first place. Nowadays, no one would dare to make an exhibition about the whole history of film, or photography, but apparently comics are still considered peripheral enough that the whole medium can be squeezed into one wing of a museum. The general public, at whom this exhibition is presumably targeted, will probably discover many new things about comics, but for people who are already comic experts, the knowledge to be gained from this exhibition will be much smaller.

Rating: ● ● ● ○ ○

Trying to understand Latent Dirichlet Allocation

Latent Dirichlet Allocation (LDA) is one of the most popular algorithms for Topic Modeling, i.e. having a computer find out what a text is about. LDA is also perhaps easier to understand than the other popular Topic Modeling approach, (P)LSA, but even though there are two well-written blog posts (Edwin Chen’s and Ted Underwood’s) that explain LDA to non-mathematicians, it still took me quite some time to grasp LDA well enough to be able to code it in a Perl script (which I have made available on GitHub, in case anyone is interested). Of course, you can always simply use software like Mallet that runs LDA over your documents and outputs the results, but if you want to know what LDA actually does, I suggest you read Edwin Chen’s and Ted Underwood’s blog posts first, and then, if you still feel you don’t really get LDA, come back here. OK?

Welcome back. Disclaimer: I’m not a mathematician and there’s still the possibility that I got it all wrong. That being said, let’s take a look at Edwin Chen’s first example again, and this time we’re going to calculate it through step by step:

  • I like to eat broccoli and bananas.
  • I ate a banana and spinach smoothie for breakfast.
  • Chinchillas and kittens are cute.
  • My sister adopted a kitten yesterday.
  • Look at this cute hamster munching on a piece of broccoli.

We immediately see that these sentences are about either eating or pets or both, but even if we didn’t know about these two topics, we would still have to make an assumption about the number of topics within our corpus of documents. Furthermore, we have to make an assumption about how these topics are distributed over the corpus. (In real-life LDA analyses, you’d run the algorithm multiple times with different parameters and then see which fits best.) For simplicity’s sake, let’s assume there are 2 topics, which we’ll call A and B, and that they’re distributed evenly: half of the words in the corpus belong to topic A and the other half to topic B.

Apparently, hamsters do indeed eat broccoli. Photograph CC-BY https://www.flickr.com/photos/carolyncoles/

What exactly is a word, though? I found the use of this term confusing in both Chen’s and Underwood’s text, so instead I’ll speak of tokens and lemmata: the lemma ‘cute’ appears as 2 tokens in the corpus above. Before we apply the actual LDA algorithm, it makes sense to not only tokenise but also lemmatise our 5 example documents (i.e. sentences), and also to remove stop words such as pronouns and prepositions, which may result in something like this:

  • like eat broccoli banana
  • eat banana spinach smoothie breakfast
  • chinchilla kitten cute
  • sister adopt kitten yesterday
  • look cute hamster munch piece broccoli
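
These preprocessing steps can be sketched in a few lines (my script is in Perl, but here is a Python sketch; the stop word list and lemma map below are hand-made for this toy example, not a general solution – real pipelines would use a proper lemmatiser):

```python
# Toy preprocessing for the five example sentences: tokenise, lemmatise,
# and remove stop words. STOP_WORDS and LEMMA_MAP are hand-written for
# exactly these sentences.
STOP_WORDS = {"i", "to", "and", "a", "for", "are", "my", "at", "this", "on", "of"}
LEMMA_MAP = {"ate": "eat", "bananas": "banana", "chinchillas": "chinchilla",
             "kittens": "kitten", "adopted": "adopt", "munching": "munch"}

def preprocess(sentence):
    tokens = sentence.lower().rstrip(".").split()
    tokens = [t for t in tokens if t not in STOP_WORDS]
    return [LEMMA_MAP.get(t, t) for t in tokens]

docs = [
    "I like to eat broccoli and bananas.",
    "I ate a banana and spinach smoothie for breakfast.",
    "Chinchillas and kittens are cute.",
    "My sister adopted a kitten yesterday.",
    "Look at this cute hamster munching on a piece of broccoli.",
]
corpus = [preprocess(d) for d in docs]
# corpus[0] is now ['like', 'eat', 'broccoli', 'banana']
```

This leaves us with 22 tokens in total, which will matter for the 50:50 topic distribution below.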

Now we randomly assign topics to tokens according to our assumptions (2 topics, 50:50 distribution). This may result in e.g. ‘cute’ getting assigned once to topic A and once to topic B. An initial random topic assignment may look like this:

  • like -> A, eat -> B, broccoli -> A, banana -> B
  • eat -> A, banana -> B, spinach -> A, smoothie -> B, breakfast -> A
  • chinchilla -> B, kitten -> A, cute -> B
  • sister -> A, adopt -> B, kitten -> A, yesterday -> B
  • look -> A, cute -> B, hamster -> A, munch -> B, piece -> A, broccoli -> B
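
Such a random initial assignment with an exact 50:50 split can be produced like this (a Python sketch; with a different seed the labels will of course differ from the ones listed above):

```python
import random

# The preprocessed documents (lemmata after stop word removal).
corpus = [["like", "eat", "broccoli", "banana"],
          ["eat", "banana", "spinach", "smoothie", "breakfast"],
          ["chinchilla", "kitten", "cute"],
          ["sister", "adopt", "kitten", "yesterday"],
          ["look", "cute", "hamster", "munch", "piece", "broccoli"]]

random.seed(0)  # fixed seed for reproducibility
n_tokens = sum(len(doc) for doc in corpus)   # 22 tokens in total
labels = ["A", "B"] * (n_tokens // 2)        # exactly 11 of each
random.shuffle(labels)

# One topic label per token, in document order.
it = iter(labels)
assignments = [[next(it) for _ in doc] for doc in corpus]
```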

Clearly, this isn’t a satisfying result yet; words like ‘eat’ and ‘broccoli’ are assigned to multiple topics when they should belong to only one, etc. Ideally, all words connected to the topic of eating should be assigned to one topic and all words related to pets should belong to the other. Now the LDA algorithm goes through the documents to improve this initial assignment: for each token, it computes the probability of belonging to each topic, based on three criteria:

  1. Which topics are the other tokens in this document assigned to? Probably the document is about one single topic, so if all or most other tokens belong to topic A, then the token in question should most likely also get assigned to topic A.
  2. Which topics are the other tokens in *all* documents assigned to? Remember that we assume a 50:50 distribution of topics, so if the majority of tokens is assigned to topic A, the token in question should get assigned to topic B to establish an equilibrium.
  3. If there are multiple tokens of the same lemma: which topic is the majority of tokens of that lemma assigned to? If most instances of ‘eat’ belong to topic A, then the token in question probably also belongs to topic A.

The actual formulas to calculate the probabilities given by Chen and Underwood seem to differ a bit from each other, but instead of bothering you with a formula, I’ll simply describe how it works in the example (my understanding being closer to Chen’s formula, I think). Let’s start with the first token of the first document (although the order doesn’t matter), ‘like’, currently assigned to topic A.

Should ‘like’ belong to topic B instead? If ‘like’ belonged to topic B, 3 out of 4 tokens in this document would belong to the same topic, as opposed to 2:2 if we stay with topic A. On the other hand, changing ‘like’ to topic B would threaten the equilibrium of topics over all documents: topic B would consist of 12 tokens and topic A of only 10, as opposed to the perfect 11:11 equilibrium if ‘like’ remains in topic A. In this case, the former consideration outweighs the latter, as the two factors get multiplied: the probability for ‘change this token to topic B’ is 3/4 * 1/12 ≈ 6%, whereas the probability for ‘stay with topic A’ is 2/4 * 1/11 ≈ 4.5%. We can also convert these numbers to relative percentages (so that they add up to 100%) and say: ‘like’ is 57% topic B and 43% topic A.

What are you supposed to do with these percentages? We’ll get there in a minute. Let’s first calculate them for the next token, ‘eat’, because it’s one of those interesting lemmata with multiple tokens in our corpus. Currently, ‘eat’ in the first document is assigned to topic B, but in the second document it’s assigned to topic A. The probability for ‘eat stays in topic B’ is the same as for ‘like stays in topic A’ above: within this document, the ratio of ‘B’ tokens to ‘A’ tokens is 2:2, which gives us 2/4 or 0.5 for the first factor; ‘eat’ would be 1 out of 11 tokens that make up topic B across all documents, giving us 1/11 for the second factor. The probability for ‘change eat to topic A’ is much higher, though, because there is already another ‘eat’ token assigned to this topic in another document. The first factor is 3/4 again, but the second is 2/12, because out of the 12 tokens that would make up topic A if we changed this token to topic A, 2 tokens would be of the same lemma, ‘eat’. In percentages, this means: this first ‘eat’ token is roughly 73% topic A and only 27% topic B.
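
For the sceptical, the exact fractions behind these two calculations can be checked in a few lines; computed exactly, before any rounding of the intermediate values, ‘like’ comes out at about 58% topic B and ‘eat’ at about 73% topic A:

```python
from fractions import Fraction as F

# 'like' (document 1, currently topic A), following the recipe above:
stay_A = F(2, 4) * F(1, 11)   # 2 of 4 doc tokens would share topic A; 1 of 11 A-tokens
to_B   = F(3, 4) * F(1, 12)   # 3 of 4 doc tokens would share topic B; 1 of 12 B-tokens
like_B = to_B / (to_B + stay_A)   # = 11/19, about 58% for topic B

# 'eat' (document 1, currently topic B); another 'eat' already sits in topic A:
stay_B = F(2, 4) * F(1, 11)
to_A   = F(3, 4) * F(2, 12)   # 2 of the 12 A-tokens would then be 'eat' tokens
eat_A  = to_A / (to_A + stay_B)   # = 11/15, about 73% for topic A

print(float(like_B), float(eat_A))
```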

In this way we can calculate probabilities for each token in the corpus. Then we randomly assign new topics to each token, only this time not on a 50:50 basis, but according to the percentages we’ve figured out before. So this time, it’s more likely that ‘like’ will end up in topic B, but there’s still a 43% chance it will get assigned to topic A again. The new distribution of topics might be slightly better than the first one, but depending on how lucky you were with the random assignment in the beginning, it’s still unlikely that all tokens pertaining to food are neatly put in one topic and the animal tokens in the other.

The solution is to iterate: repeat the process of probability calculations with the new topic assignments, then randomly assign new topics based on the latest probabilities, and so on. After a couple of thousand iterations, the probabilities should make more sense. Ideally, there should now be some tokens with high percentages for each topic, so that both topics are clearly defined.
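
The whole iteration can be sketched as a compact sampling loop (again Python rather than the Perl of my actual script; this follows the informal recipe described above, not the exact collapsed Gibbs sampling formula with Dirichlet priors that tools like Mallet implement):

```python
import random

def resample(corpus, assignments, topics=("A", "B"), n_iters=1000, seed=42):
    """Repeatedly resample every token's topic label, weighting each topic
    by the three criteria described above (a sketch, not the exact
    Dirichlet-prior formula)."""
    rng = random.Random(seed)
    for _ in range(n_iters):
        for d, doc in enumerate(corpus):
            for i, lemma in enumerate(doc):
                weights = []
                for t in topics:
                    # Tentatively assign this token to t, then count:
                    assignments[d][i] = t
                    in_doc = assignments[d].count(t)                  # criterion 1
                    in_topic = sum(a.count(t) for a in assignments)   # criterion 2
                    of_lemma = sum(tt == t and w == lemma             # criterion 3
                                   for dd, aa in zip(corpus, assignments)
                                   for w, tt in zip(dd, aa))
                    weights.append(in_doc / len(doc) * of_lemma / in_topic)
                # Draw the new label in proportion to the topic weights.
                assignments[d][i] = rng.choices(topics, weights=weights)[0]
    return assignments

corpus = [["like", "eat", "broccoli", "banana"],
          ["eat", "banana", "spinach", "smoothie", "breakfast"],
          ["chinchilla", "kitten", "cute"],
          ["sister", "adopt", "kitten", "yesterday"],
          ["look", "cute", "hamster", "munch", "piece", "broccoli"]]
# An arbitrary 11:11 initial assignment:
assignments = [["A", "B", "A", "B"], ["A", "B", "A", "B", "A"],
               ["B", "A", "B"], ["A", "B", "A", "B"],
               ["A", "B", "A", "B", "A", "B"]]
result = resample(corpus, assignments, n_iters=200)
```

With only 22 tokens, the sampled labels never really stabilise, which is consistent with the disappointing results below.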

Only with this example, it doesn’t work out. After 10,000 iterations, the LDA script I’ve written produces results like this:

  • topic A: cute (88%), like (79%), chinchilla (77%), hamster (76%), …
  • topic B: kitten (89%), sister (79%), adopt (79%), yesterday (79%), …

As you can see, words from the ‘animals’ category ended up in both topics, so this result is worthless. The result given by Mallet after 10,000 iterations is slightly better:

  • topic 0: cute kitten broccoli munch hamster look yesterday sister chinchilla spinach
  • topic 1: banana eat piece adopt breakfast smoothie like

Topic 0 is clearly the ‘animal’ topic here. Words like ‘broccoli’ and ‘munch’ slipped in because they occur in the mixed-topic sentence, “Look at this cute hamster munching on a piece of broccoli”. No idea why ‘spinach’ is in there too though. It’s equally puzzling that ‘adopt’ somehow crept into topic 1, which otherwise can be identified as the ‘food’ topic.

The reason for this ostensible failure of the LDA algorithm is probably the small size of the test data set. The results become more convincing the greater the number of tokens per document.

Detail from p. 1 of Astonishing X-Men (1995) #1 by Scott Lobdell and Joe Madureira. The text in the caption boxes (with stop words liberally removed) can be tokenised and lemmatised as: begin break man heart sear soul erik lehnsherr know world magneto founder astonishing x-men last bastion hope world split asunder ravage eugenics war human mutant know exact ask homo superior comrade day ask die

For a real-world example with more tokens, I have selected some X-Men comics. The idea is that because they are about similar subject matter, we can expect some words to be used in multiple texts from which topics can be inferred. This new test corpus consists of the first 100 tokens (after stop word removal) from each of the following comic books that I more or less randomly pulled from my longbox/shelf: Astonishing X-Men #1 (1995) by Scott Lobdell, Ultimate X-Men #1 (2001) by Mark Millar, and Civil War: X-Men #1 (2006) by David Hine. All three comics open with captions or dialogue with relatively general remarks about the ‘mutant question’ (i.e. government action / legislation against mutants, human rights of mutants) and human-mutant relations, so that otherwise uncommon lemmata such as ‘mutant’, ‘human’ or ‘sentinel’ occur in all three of them. To increase the number of documents, I have split each 100-token batch into two parts at semantically meaningful points, e.g. when the text changes from captions to dialogue in AXM, or after the voice from the television is finished in CW:XM.

Page 6, panel 1 from Ultimate X-Men #1 by Mark Millar and Adam Kubert. Tokens: good evening boaz eshelmen watch channel nine new update tonight top story trial run sentinel hail triumphant success mutant nest los angeles uncover neutralize civilian casualty

I then ran my LDA script (as described above) over these 6 documents with ~300 tokens, again with the assumption that there are 2 equally distributed topics (because I had carelessly hard-coded this number of topics in the script and now I’m too lazy to re-write it). This is the result after 1,000 iterations:

  • topic A: x-men (95%), sentinel (93%), sentinel (91%), story (91%), different (90%), …
  • topic B: day (89%), kitty (86%), die (86%), …

So topic A looks like the ‘mutant question’ issue with tokens like ‘x-men’ and two times ‘sentinel’, even though ‘mutant’ itself isn’t among the high-scoring tokens. Topic B, on the other hand, makes less sense (Kitty Pryde only appears in CW:XM, so that ‘kitty’ occurs in merely 2 of the 6 documents), and its highest percentages are also much lower than those in topic A. Maybe this means that there’s only one actual topic in this corpus.

Page 1, panel 5 from Civil War: X-Men #1 by David Hine and Yanick Paquette. Tokens: incessant rain hear thing preternatural acute hearing cat flea

Running Mallet over this corpus (2 topics, 10,000 iterations) yields an even less useful result. The first 5 words in each topic are:

  • topic 0: mutant, know, x-men, ask, cooper
  • topic 1: say, sentinel, morph, try, ready

(Valerie Cooper and Morph are characters that appear in only one comic, CW:XM and AXM, respectively.)

Topic 0 at least associates ‘x-men’ with ‘mutant’, but then again, ‘sentinel’ is assigned to the other topic. Thus neither topic can be related to an intuitively perceived theme in the comics. It’s clear how these topics were generated though: there’s only 1 document in which ‘sentinel’ doesn’t occur, the first half of the CW:XM excerpt, in which Valerie Cooper is interviewed on television. But ‘x-men’ and ‘mutant’ do occur in this document, the latter even twice, and also ‘know’ occurs more frequently (3 times) here than in other documents.

So the results from Mallet and maybe even my own Perl script seem to be correct, in the sense that the LDA algorithm has been properly performed and one can see from the results how the algorithm got there. But what’s the point of having ‘topics’ that can’t be matched to what we intuitively perceive as themes in a text?

The problem with our two example corpora was that they were still not large enough for LDA to yield meaningful results. As with all statistical methods, LDA works better the larger the corpus. In fact, the idea of such methods is that they are best applied to amounts of text that are too large for a human to read. Therefore, LDA might not be that useful for disciplines (such as comics studies) in which it’s difficult to gather large text corpora in digital form. But do feel free to e.g. randomly download texts from Wikisource, and you’ll find that LDA is able to successfully detect clusters of words that occur in semantically similar documents.


Politics in Warren Ellis’s Trees

Happy Labour Day! And welcome to the second blog post of what is now a series of posts on Warren Ellis and politics. (If you’re wondering why Ellis and why politics, read last year’s post here.) This time we’re going to look at the first couple of issues of Trees (Image 2014-2016, art by Jason Howard).

Trees is a science fiction story set in the near future. The comic starts as a collection of episodes that are only loosely connected through the ‘Trees’ phenomenon, extraterrestrial pillars that have landed on various places on earth. There are three settings that are visited repeatedly and extensively in the first few issues:

  • Cefalù, Sicily, Italy. This part of the story centers on Eligia Gatti, a young woman whose boyfriend Tito runs a neo-fascist gang. Tito sums up the situation: “Mafia to the south of us, ‘Ndrangheta to the north, the government collapsing, and us in the middle. Cefalu is ruined. Someone needs to take control of things.” (#2). This is the ‘strong man’ rhetoric once again: government has failed to protect society from crime, so a few individuals take matters into their own hands. Only this time, Tito’s gang merely seeks to replace organised crime by their own flavour of it, using mafia-like methods such as extortion. Furthermore, the gang members are clearly portrayed as villains, and as the story progresses, Eligia tries to break free from the fascists.
    However, Eligia’s emancipation is not achieved through a reinstatement of governmental power. Instead, she turns to another individual who stands outside the law (as evidenced by his gun-wielding), the enigmatic elderly Professor Luca Bongiorno. Thus Ellis doesn’t provide a proper solution to this case of government failure.
  • Spitsbergen, Norway. A group of young scientists from all over the world lives and works at an Arctic research facility. Due to the harsh climate, they live an isolated life removed from the rest of society. Ellis portrays this quasi-anarchy as a double-edged sword: on the one hand, the scientists are free to go about their work as they please without much supervision, and they don’t have to worry about food and housing. On the other hand, any possible conflicts are difficult to resolve because there is no impartial authority: when Sarah suggests to Marsh that he should return home, saying “I don’t think it’s even been legal for you to have been on station for two and a half years”, he answers, “So send someone up here to arrest me” (#2). Clearly, government has little power over the inhabitants of Blindhail Station. Marsh even implies that their life is a regression to barbarism: “What’s civilized? We live in bears-that-eat-people country” (#1).
  • Shu, China. This appears to be a fictional city which has formed around one of the Trees. Access to it is restricted, but once you’ve managed to get inside the city walls, it turns out to be an artist colony of utopian qualities. We see Shu through the eyes of Chenglei, a young artist from rural China (or, as a citizen of Shu puts it, “from Pigshit Village in scenic Incest Province”) who is overwhelmed by the freedom and permissive attitude he finds there. The Shu story arc is Ellis’s love letter to anarchy. Unhindered by government authorities, Chenglei is for the first time in his life able to explore his sexuality, while back in “Pigshit Village […] people are still beaten by their own families for being gay”, as Chenglei notes in a later issue (#6).

In all three scenarios, Ellis asks what happens when governmental power loosens and anarchy (in different degrees and different flavours) sets in. The overall picture he paints is ambiguous – he shows both the risks and the opportunities of anarchy – but this exploration of anarchy can also be read as a refusal of authoritarian forms of government: clearly, the future as Ellis imagines it does not lie in governmental law enforcement.

It should be noted that some of the other story arcs in Trees are more explicitly political, but they only become important in later issues.


Sakuga in Re:Zero

This is the first part of a series of blog posts celebrating 100 Years of Anime. (There is evidence of animated films produced in Japan before 1917, but 1917 is regarded as the ‘official’ year of birth of anime.) Instead of emphasising that anime and manga are completely different media and whining about how fandom (and sometimes even scholarly discourse) around Japanese popular culture is dominated by anime at the expense of manga, The 650-Cent Plague is going to join in on the celebration and run a couple of posts on anime.

Granted, there are many similarities between anime and manga, but today we’ll look at an aspect that is specific to animation: sakuga. I haven’t seen this term in scholarly literature yet, but there are various fan/journalistic resources online (see e.g. this collection of links) that explain sakuga. These definitions are fuzzy and somewhat contradictory – for instance, some stress the importance of the authorship of individual outstanding key animators while others are based on the number of animated frames per second – but all agree that ‘sakuga’ basically means ‘scenes of extraordinary animation quality’ (as opposed to the overall animation quality of an anime series). ‘Animation quality’ is, of course, another fuzzy term (is it about the amount of labour, ingenuity, or aesthetic effect?), but let’s for once not overtheorise things and instead turn to some examples of what I feel might pass as sakuga.

For many people, Re:ゼロから始める異世界生活 / Re:Zero − Starting Life in Another World (White Fox, dir. Masaharu Watanabe) was the best anime series of 2016. I wouldn’t go as far as that, but it’s definitely an anime series that exemplifies the state of the art of contemporary animation quality, and as such should be a rich source of sakuga. For the following list I’ve picked one sakuga scene from each episode:

Episode 1A, 24:46-25:14 – Emilia summons the ‘lesser spirits’, essentially semi-transparent blue spheres drifting in different directions, which also illuminate both background and characters in blue light. Also notice how some of the spheres move in front of the characters and others behind them, so it’s not just one layer placed on top of the image.

Episode 1B, 18:25-18:47 – Rom swings his club against Elsa, accidentally destroying his own furniture along the way. Collapsing structures are a popular motif of sakuga (see the ‘debris’ tag at Sakugabooru).

Episode 2, 16:01-16:05 – The hut into which Subaru crashes is another beautifully collapsing item.

Episode 3, 16:28-17:03 – Reinhard’s impressive sword attack move starts with an effect similar to the ‘lesser spirits’ in Episode 1A followed by a quick succession of various other effects.

Episode 4, 23:04-23:12 – Beatrice’s hair contracts and expands like a coil spring.

Episode 5, 1:03-1:08 – Smooth animation of the shadows cast by the lattice windows.

Episode 6, 5:22-5:31 – Puck conjures a jet of water which acts as a semi-transparent layer that twists the background and characters behind it.

Episode 7, 18:54-19:36 – Another nice animation of Betty’s hair; this time her locks are being moved by the wind.

Episode 8, 19:43-20:38 – Not so much a matter of animation per se, but the subtle colouring of this scene beautifully evokes the lighting situation at sunset.

Episode 9, 10:09-11:16 – The fire of the torch and the braziers is both a moving semi-transparent layer and a source of lighting.

Episode 10, 19:37-19:43 – Rem’s chain moves like a chain should.

Episode 11, 0:46-0:50 – There are lots of magical special effects in this episode, the most impressive one right at the beginning while the credits still roll: rays of magical light followed by a kind of supernatural whirlwind.

Episode 12, 0:36-0:45 – This one might be hard to see on the still because it’s such a subtle effect: sunlight falls through the trees and creates a pattern of patches of light and shadow on the characters which moves as they walk.

Episode 13, 20:47-20:48 – In close-ups such as this one, Emilia’s forelock (and also Rem’s) turns into an opaque layer.

Episode 14, 22:11-22:13 – As the dragon cart brakes sharply, the ‘camera’ revolves slightly around it.

Episode 15, 21:28-23:16 – Many cool things happen in this episode, but it’s particularly famous for its long ending credits scene in which Subaru and Rem are slowly engulfed in an ever-intensifying snowstorm.

Episode 16, 3:57-3:59 – Subaru’s moving image is reflected in the steaming tea cup.

Episode 17, 3:05-3:12 – The rope Subaru holds on to splinters.

Episode 18, 4:12-4:15 – This episode has become legendary in itself due to the long and emotional dialogue between Subaru and Rem. Before that dialogue, however, there is a brief shot of a bowl of apples falling to the ground, in which the objects move not only downward but also ricochet off the floor in different directions, and they even rotate too.

Episode 19, 23:36-23:55 – The White Whale’s halo tilts along with it.

Episode 20, 5:16-5:19 – Another nicely animated shot of the White Whale turning in flight.

Episode 21, 19:42-19:52 – A backlighting effect employed three times in this episode: the rays of the sun seamlessly connect background to foreground.

Episode 22, 3:50-3:55 – The spatial arrangement of the characters requires them to be moved in three layers as the ‘camera’ revolves around their circle.

Episode 23, 13:28-13:33 – Explosions are a sakuga staple, and this one is a short but impressive combination of fire and smoke.

Episode 24, 8:03-8:07 – While the wagon moves diagonally to the left and away from the viewer, the lantern that is attached to it swings sideways.

Episode 25, 8:02-8:13 – Otto’s wagon is drawn by both a two-legged and a four-legged creature.

So these are the sakuga scenes I found most impressive in each episode. Did I miss one? Tell me in the comments. Check out the Re:Zero sakuga at Sakugabooru too.


Linda Hutcheon’s Postmodernism – in comics?

There have already been five posts about postmodernism on this weblog, so why a sixth one? Linda Hutcheon’s 1988 book A Poetics of Postmodernism: History, Theory, Fiction is interesting because it directly engages in a dialogue – or should I say, argument – with previous texts on postmodernism such as Fredric Jameson’s.

Hutcheon defines postmodernism as:

  • “fundamentally contradictory”,
  • “resolutely historical”, and
  • “inescapably political” (p. 4, my emphasis).

This seems to contradict Jameson’s and other authors’ view of postmodernism as ahistorical and depthless. But what exactly does Hutcheon mean by ‘historical’ and ‘political’?

The treatment of the past in postmodern works is indeed different from earlier, modernist works. Postmodernism “suggests no search for transcendent timeless meaning, but rather a re-evaluation of and a dialogue with the past in the light of the present. […] It does not deny the existence of the past; it does question whether we can ever know that past other than through its textualized remains.” (pp. 19-20, emphasis LH).

Likewise, the political nature of postmodernism is a complex one, “a curious mixture of the complicitous and the critical” (p. 201). “The basic postmodernist stance [is] a questioning of authority” (p. 202), but at the same time it is also “suspicious of ‘heroes, crusades, and easy idealism’ […]” (p. 203, quoting Bill Buford). “The postmodern is ironic, distanced” (p. 203).

The contradictory nature of postmodernism, on the other hand, is something everyone can agree on. This characteristic seems to be more of a prerequisite for, or superordinate concept encompassing, the other two.

Hutcheon’s idea of postmodernism is a relatively narrow one. Although she references many examples of postmodernist works (mainly novels), it becomes clear that those examples represent only a part, and probably not a large one at that, of contemporary cultural production. This brings us to today’s comic, one not quite as randomly selected as previous examples in this column: it might fit Hutcheon’s criteria (well, see below), whereas some other comics that have a more ‘postmodern’ feel to them might not.

Brahm Revel’s Guerillas vol. 1 (Oni Press, 2010) opens with a quotation attributed to French Prime Minister Georges Clemenceau (1841–1929). The first words of the comic proper are in a caption box that says, “Vietnam, 1970.” For the next 50 pages, the story follows John Francis Clayton, an “FNG” (Fucking New Guy) in a military unit in the Vietnam War. Revel pays a lot of attention to detail, such as military equipment and jargon. There are references to historic figures like Richard Nixon or Jane Goodall. And the depicted events are typical of what is commonly known about the Vietnam War: U.S. soldiers raping native women, torching villages, falling victim to the Viet Cong’s guerilla tactics, etc.

All of this serves to create a sense of historical accuracy. While the story narrated by Clayton can with some certainty be identified as fictional, the events just might have happened as depicted, in Vietnam, in 1970.

Then there’s a rupture around p. 56, at the end of the first chapter, when the chimpanzees are introduced, a rogue squad of trained apes equipped and dressed as U.S. soldiers, who fight against the Viet Cong on their own. Chapter 2 tells their origin as an experiment conducted by scientists (of German descent, of course). The chimpanzees exhibit a mix of human and animal behaviour; they thump their chests but smoke cigarettes.

This appears to be the contradiction that is central to Guerillas: the outlandish, ‘unrealistic’ motif of the scientifically enhanced apes clashes with an historically accurate, ‘realistic’ setting. While the beginning of this comic might be read as Revel’s version of what really happened in Vietnam, the story of the chimpanzees can hardly be interpreted this way: here we’re clearly in the realm of fiction, or entertainment, or fantasy. Of course, earlier fantasy and science fiction stories have used similar setups (e.g. Bram Stoker’s Dracula). However, the main difference is that in those classic stories, the authors went to great lengths to make the improbable seem plausible and fit into the realistic setting, whereas it’s harder to suspend one’s disbelief when reading Guerillas (not least because we’re reading it with the experience of many of those older similar stories).

According to Hutcheon, such a treatment of the past tells us something about the present, and this is also where the political nature of the work comes from. It is unreasonable to assume that the depiction of the grimness of the Vietnam War is a protest against, reassessment of, or coming-to-terms with it, given that the comic was made over 30 years after the end of the war. The ostensible reason for the Vietnam setting is that it makes more sense to deploy chimpanzee soldiers in the Vietnamese jungle than e.g. in the desert of the Gulf Wars, or in WWII in which the U.S. experience of the tropical regions was dominated by naval and aerial warfare (The Thin Red Line perhaps being the exception that proves the rule). But maybe Guerillas isn’t so time-specific after all. One of its themes is that a man learns from animals what humanity truly is, and this is a message that is relevant regardless of time and place: not unlike Pride of Baghdad by Vaughan and Henrichon, Guerillas can also be read as a commentary on the dehumanising effects of the war in Iraq, and by extension also Afghanistan and any other armed conflict.

But wouldn’t this – i.e. extrapolating from the specific to the universal – be a rather modernist reading? Indeed, Guerillas doesn’t seem to be the ideal example of Hutcheon’s postmodernism, but then again, few comics would meet her criteria without reservation. Still, Guerillas comes close. One can easily imagine how it might have qualified if Revel had made some different choices, e.g. if the protagonist had been made identifiable as a real person (thus creating a contradiction between the genres of biography and fiction, cf. Hutcheon p. 9), or if the chimpanzee experiment had been based on more advanced science and technology (thus creating a contradiction between different time layers, cf. Hutcheon p. 5). The resulting work would have been postmodern in Hutcheon’s sense, but whether it would have been a better comic is another question.


Review, Jirō Taniguchi memorial edition: Chichi no Koyomi

One blogpost is not enough to pay homage to the recently deceased Jirō Taniguchi, so here’s another one.

Another noteworthy but largely overlooked manga by Taniguchi is Chichi no Koyomi (My Father’s Journal), of which there is no English translation either. The reason for its neglect in the Western world is probably a different one, though: it might be too similar to Taniguchi’s magnum opus A Distant Neighborhood – which was originally published four years *after* Chichi no Koyomi. Reading these two manga in the ‘wrong’ order makes Chichi no Koyomi feel like a compressed, less daring (no supernatural time travel) and more episodic (thus somewhat haphazard) rip-off of A Distant Neighborhood, when in fact the latter was more of a logical continuation or evolution out of the former.

Die Sicht der Dinge (父の暦 / Chichi no Koyomi)
Language: German (translated from Japanese)
Author: Jirō Taniguchi
Publisher: Carlsen (originally Shōgakukan)
Year: 2008 (original run 1994)
Pages: 278
Price: € 16,90
Website: https://www.carlsen.de/softcover/die-sicht-der-dinge/20582 (German)
ISBN: 978-3-551-77731-7

Yōichi Yamashita (i.e. not Taniguchi himself but an autobiographically influenced fictitious character) is a middle-aged salaryman who lives in Tokyo with his wife. When his father dies, he needs to return to his native Tottori for the funeral, for the first time in 15 years. There he meets his uncle, his sister and other characters with whom he reminisces about his father’s life, Yōichi’s own childhood and how the rift between the two came to be.

The events in the past are shown as flashback sequences, although they take up more space than the events in the present. I wouldn’t call the present-day sequences a framing narrative, though, because several chapters begin in the past, then switch to the present, before they switch back to the past again, so that the past frames the present. There is some structural variation and jumping back and forth in time. The most strikingly structured episode is the one in which seven-year-old Yōichi runs away from home to his uncle in search of his mother: adult Yōichi begins to tell this episode on pp. 19-25, but doesn’t pick it up again until 130 pages later.

Another interesting device, albeit employed only tentatively, is an unreliable narrator: two events from Yōichi’s childhood are first shown as he remembers them, but later he learns from his relatives how he actually misremembered them. This device makes the story more dynamic; just as in A Distant Neighborhood, the past isn’t fixed but changeable. However, there is also an emphasis on a historic event in Chichi no Koyomi, the Great Fire of Tottori in 1952, which makes the past more site- and time-specific in this manga than in A Distant Neighborhood.

Artistically, Chichi no Koyomi is Taniguchi at the top of his game. Particularly the characters and their facial expressions are spot-on, which is no small feat given the number of characters, most of whom appear multiple times at different ages.

However, it should be noted that the German publisher Carlsen didn’t do a particularly good job of flipping the manga so that it reads left-to-right in this German edition: the speech bubbles and captions are often arranged diagonally in the panel, in which case the reading order runs from bottom(!) left to top right, which is awfully confusing. Furthermore, some panels are mirrored and some are not, resulting in the old problems of right-handed characters becoming left-handed and the like.

That being said, Chichi no Koyomi is a classic Taniguchi manga that one shouldn’t miss. Together with The Walking Man and A Distant Neighborhood, this manga embodies the essence of Taniguchi’s work as a mangaka.

Rating: ● ● ● ● ○