Latent Dirichlet Allocation (LDA) is one of the most popular algorithms for Topic Modeling, i.e. having a computer find out what a text is about. LDA is also perhaps easier to understand than the other popular Topic Modeling approach, (P)LSA, but even though there are two well-written blog posts that explain LDA to non-mathematicians (Edwin Chen’s and Ted Underwood’s), it still took me quite some time to grasp LDA well enough to be able to code it in a Perl script (which I have made available on GitHub, in case anyone is interested). Of course, you can always simply use software like Mallet that runs LDA over your documents and outputs the results, but if you want to know what LDA actually does, I suggest you read Edwin Chen’s and Ted Underwood’s blog posts first, and then, if you still feel you don’t really get LDA, come back here. OK?
Welcome back. Disclaimer: I’m not a mathematician and there’s still the possibility that I got it all wrong. That being said, let’s take a look at Edwin Chen’s first example again, and this time we’re going to calculate it through step by step:
- I like to eat broccoli and bananas.
- I ate a banana and spinach smoothie for breakfast.
- Chinchillas and kittens are cute.
- My sister adopted a kitten yesterday.
- Look at this cute hamster munching on a piece of broccoli.
We immediately see that these sentences are about either eating or pets or both, but even if we didn’t know about these two topics, we still have to make an assumption about the number of topics within our corpus of documents. Furthermore, we have to make an assumption about how these topics are distributed over the corpus. (In real-life LDA analyses, you’d run the algorithm multiple times with different parameters and then see which fits best.) For simplicity’s sake, let’s assume there are 2 topics, which we’ll call A and B, and they’re distributed evenly: half of the words in the corpus belong to topic A and the other half to topic B.
What exactly is a word, though? I found the use of this term confusing in both Chen’s and Underwood’s text, so instead I’ll speak of tokens and lemmata: the lemma ‘cute’ appears as 2 tokens in the corpus above. Before we apply the actual LDA algorithm, it makes sense to not only tokenise but also lemmatise our 5 example documents (i.e. sentences), and also to remove stop words such as pronouns and prepositions, which may result in something like this:
- like eat broccoli banana
- eat banana spinach smoothie breakfast
- chinchilla kitten cute
- sister adopt kitten yesterday
- look cute hamster munch piece broccoli
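This preprocessing step can be sketched in a few lines. My script is written in Perl, but here is a minimal Python equivalent; the stop word list and the lemma table are hand-written just for these five sentences (a real pipeline would use an NLP library’s lemmatiser and a proper stop word list):

```python
import re

# Hand-written resources for this toy corpus only (assumptions, not a real
# stop word list or lemmatiser).
STOP_WORDS = {"i", "to", "and", "a", "for", "are", "my", "at", "this", "on", "of"}
LEMMATA = {"bananas": "banana", "ate": "eat", "chinchillas": "chinchilla",
           "kittens": "kitten", "adopted": "adopt", "munching": "munch"}

def preprocess(sentence):
    """Lowercase, tokenise, lemmatise, and drop stop words."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return [LEMMATA.get(t, t) for t in tokens if t not in STOP_WORDS]

sentences = [
    "I like to eat broccoli and bananas.",
    "I ate a banana and spinach smoothie for breakfast.",
    "Chinchillas and kittens are cute.",
    "My sister adopted a kitten yesterday.",
    "Look at this cute hamster munching on a piece of broccoli.",
]
for s in sentences:
    print(preprocess(s))
```

Running this reproduces the five lemmatised documents listed above.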
Now we randomly assign topics to tokens according to our assumptions (2 topics, 50:50 distribution). This may result in e.g. ‘cute’ getting assigned once to topic A and once to topic B. An initial random topic assignment may look like this:
- like -> A, eat -> B, broccoli -> A, banana -> B
- eat -> A, banana -> B, spinach -> A, smoothie -> B, breakfast -> A
- chinchilla -> B, kitten -> A, cute -> B
- sister -> A, adopt -> B, kitten -> A, yesterday -> B
- look -> A, cute -> B, hamster -> A, munch -> B, piece -> A, broccoli -> B
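Such an initial assignment can be produced by shuffling an exactly 50:50 pool of topic labels over the 22 tokens – a minimal Python sketch:

```python
import random

# The five lemmatised documents from above.
docs = [
    ["like", "eat", "broccoli", "banana"],
    ["eat", "banana", "spinach", "smoothie", "breakfast"],
    ["chinchilla", "kitten", "cute"],
    ["sister", "adopt", "kitten", "yesterday"],
    ["look", "cute", "hamster", "munch", "piece", "broccoli"],
]
n_tokens = sum(len(d) for d in docs)   # 22 tokens in total
pool = ["A", "B"] * (n_tokens // 2)    # exactly 11 of each topic
random.shuffle(pool)
# Deal the shuffled labels out to the tokens, document by document.
assignments = [[pool.pop() for _ in doc] for doc in docs]
print(assignments)
```

Because the pool is shuffled rather than sampled independently per token, the 50:50 assumption holds exactly, not just in expectation.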
Clearly, this isn’t a satisfying result yet; words like ‘eat’ and ‘broccoli’ are assigned to multiple topics when they should belong to only one, etc. Ideally, all words connected to the topic of eating should be assigned to one topic and all words related to pets should belong to the other. Now the LDA algorithm goes through the documents to improve this initial topic assignment: it computes probabilities which topic each token should belong to, based on three criteria:
- Which topics are the other tokens in this document assigned to? Probably the document is about one single topic, so if all or most other tokens belong to topic A, then the token in question should most likely also get assigned to topic A.
- Which topics are the other tokens in *all* documents assigned to? Remember that we assume a 50:50 distribution of topics, so if the majority of tokens is assigned to topic A, the token in question should get assigned to topic B to establish an equilibrium.
- If there are multiple tokens of the same lemma: which topic is the majority of tokens of that lemma assigned to? If most instances of ‘eat’ belong to topic A, then the token in question probably also belongs to topic A.
The actual formulas to calculate the probabilities given by Chen and Underwood seem to differ a bit from each other, but instead of bothering you with a formula, I’ll simply describe how it works in the example (my understanding being closer to Chen’s formula, I think). Let’s start with the first token of the first document (although the order doesn’t matter), ‘like’, currently assigned to topic A.
Should ‘like’ belong to topic B instead? If ‘like’ belonged to topic B, 3 out of 4 tokens in this document would belong to the same topic, as opposed to 2:2 if we stay with topic A. On the other hand, changing ‘like’ to topic B would threaten the equilibrium of topics over all documents: topic B would consist of 12 tokens and topic A of only 10, as opposed to the perfect 11:11 equilibrium if ‘like’ remains in topic A. In this case, the former consideration outweighs the latter, as the two factors get multiplied: the score for ‘change this token to topic B’ is 3/4 * 1/12 = 6.25%, whereas the score for ‘stay with topic A’ is 2/4 * 1/11 ≈ 4.5%. We can also normalise these numbers (so that they add up to 100%) and say: ‘like’ is 58% topic B and 42% topic A.
What are you supposed to do with these percentages? We’ll get there in a minute. Let’s first calculate them for the next token, ‘eat’, because it’s one of those interesting lemmata with multiple tokens in our corpus. Currently, ‘eat’ in the first document is assigned to topic B, but in the second document it’s assigned to topic A. The score for ‘eat stays in topic B’ is calculated the same way as for ‘like stays in topic A’ above: within this document, the ratio of ‘B’ tokens to ‘A’ tokens is 2:2, which gives us 2/4 or 0.5 for the first factor; ‘eat’ would be 1 out of the 11 tokens that make up topic B across all documents, giving us 1/11 for the second factor. The score for ‘change eat to topic A’ is much higher, though, because there is already another ‘eat’ token assigned to that topic in another document. The first factor is 3/4 again, but the second is 2/12, because out of the 12 tokens that would make up topic A if we changed this token, 2 would be of the same lemma, ‘eat’. In percentages, this means: this first ‘eat’ token is 73% topic A and only 27% topic B.
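The two worked examples can be double-checked with a short function. Note that this reflects my reading of the calculation described above, not the canonical LDA sampling formula (real implementations also add smoothing hyperparameters):

```python
def topic_probs(docs, assignments, d, i, topics=("A", "B")):
    """Normalised topic probabilities for token i of document d, as the
    product of the two factors described above."""
    lemma = docs[d][i]
    scores = {}
    for t in topics:
        # factor 1: tokens in this document that would carry topic t
        # (the candidate token itself counts towards t, hence the 1 +)
        in_doc = 1 + sum(1 for j, a in enumerate(assignments[d])
                         if j != i and a == t)
        # factor 2: share of topic t's tokens (corpus-wide) that are this
        # lemma; both counts start at 1 for the candidate token itself
        same_lemma, topic_size = 1, 1
        for dd, doc in enumerate(docs):
            for j, a in enumerate(assignments[dd]):
                if (dd, j) != (d, i) and a == t:
                    topic_size += 1
                    same_lemma += doc[j] == lemma
        scores[t] = (in_doc / len(docs[d])) * (same_lemma / topic_size)
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}

docs = [
    ["like", "eat", "broccoli", "banana"],
    ["eat", "banana", "spinach", "smoothie", "breakfast"],
    ["chinchilla", "kitten", "cute"],
    ["sister", "adopt", "kitten", "yesterday"],
    ["look", "cute", "hamster", "munch", "piece", "broccoli"],
]
# The initial random assignment from the list above.
init = [["A", "B", "A", "B"],
        ["A", "B", "A", "B", "A"],
        ["B", "A", "B"],
        ["A", "B", "A", "B"],
        ["A", "B", "A", "B", "A", "B"]]
print(topic_probs(docs, init, 0, 0))  # 'like':      ~42% A, ~58% B
print(topic_probs(docs, init, 0, 1))  # first 'eat': ~73% A, ~27% B
```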
In this way we can calculate probabilities for each token in the corpus. Then we randomly assign new topics to each token, only this time not on a 50:50 basis, but according to the percentages we’ve figured out before. So this time, it’s more likely that ‘like’ will end up in topic B, but there’s still a 43% chance it will get assigned to topic A again. The new distribution of topics might be slightly better than the first one, but depending on how lucky you were with the random assignment in the beginning, it’s still unlikely that all tokens pertaining to food are neatly put in one topic and the animal tokens in the other.
The solution is to iterate: repeat the process of probability calculations with the new topic assignments, then randomly assign new topics based on the latest probabilities, and so on. After a couple of thousand iterations, the probabilities should make more sense. Ideally, there should now be some tokens with high percentages for each topic, so that both topics are clearly defined.
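The whole iteration loop can then be sketched as follows. Instead of re-counting the corpus for every token, this version keeps running counts and updates them in place; again, the smoothing hyperparameters of full LDA are left out, as they are in the description above:

```python
import random
from collections import Counter

def lda(docs, topics=("A", "B"), iterations=1000, seed=0):
    """Sketch of the iterative resampling described above."""
    rng = random.Random(seed)
    z = [[rng.choice(topics) for _ in doc] for doc in docs]  # random init
    doc_topic = [Counter(zd) for zd in z]   # topic counts per document
    lemma_topic = Counter()                 # (lemma, topic) counts
    topic_size = Counter()                  # corpus-wide topic sizes
    for doc, zd in zip(docs, z):
        for w, t in zip(doc, zd):
            lemma_topic[w, t] += 1
            topic_size[t] += 1
    for _ in range(iterations):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                old = z[d][i]
                # take the token out of all counts ...
                doc_topic[d][old] -= 1
                lemma_topic[w, old] -= 1
                topic_size[old] -= 1
                # ... score each topic with the two factors from the text
                # (the +1 puts the token itself back for the candidate) ...
                weights = [(doc_topic[d][t] + 1) / len(doc)
                           * (lemma_topic[w, t] + 1) / (topic_size[t] + 1)
                           for t in topics]
                # ... and draw its new topic proportionally to those scores
                new = rng.choices(topics, weights=weights)[0]
                z[d][i] = new
                doc_topic[d][new] += 1
                lemma_topic[w, new] += 1
                topic_size[new] += 1
    return z

docs = [
    ["like", "eat", "broccoli", "banana"],
    ["eat", "banana", "spinach", "smoothie", "breakfast"],
    ["chinchilla", "kitten", "cute"],
    ["sister", "adopt", "kitten", "yesterday"],
    ["look", "cute", "hamster", "munch", "piece", "broccoli"],
]
print(lda(docs, iterations=100))
```

With only 22 tokens, the outcome still varies a lot from seed to seed, which fits the unstable results this tiny corpus produces.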
With this example, though, it doesn’t work out. After 10,000 iterations, the LDA script I’ve written produces results like this:
- topic A: cute (88%), like (79%), chinchilla (77%), hamster (76%), …
- topic B: kitten (89%), sister (79%), adopt (79%), yesterday (79%), …
As you can see, words from the ‘animals’ category ended up in both topics, so this result is worthless. The result given by Mallet after 10,000 iterations is slightly better:
- topic 0: cute kitten broccoli munch hamster look yesterday sister chinchilla spinach
- topic 1: banana eat piece adopt breakfast smoothie like
Topic 0 is clearly the ‘animal’ topic here. Words like ‘broccoli’ and ‘munch’ slipped in because they occur in the mixed-topic sentence, “Look at this cute hamster munching on a piece of broccoli”. No idea why ‘spinach’ is in there too though. It’s equally puzzling that ‘adopt’ somehow crept into topic 1, which otherwise can be identified as the ‘food’ topic.
The reason for this ostensible failure of the LDA algorithm is probably the small size of the test data set. The results become more convincing the greater the number of tokens per document.
For a real-world example with more tokens, I have selected some X-Men comics. The idea is that because they are about similar subject matters, we can expect some words to be used in multiple texts from which topics can be inferred. This new test corpus consists of the first 100 tokens (after stop word removal) from each of the following comic books that I more or less randomly pulled from my longbox/shelf: Astonishing X-Men #1 (1995) by Scott Lobdell, Ultimate X-Men #1 (2001) by Mark Millar, and Civil War: X-Men #1 (2006) by David Hine. All three comics open with captions or dialogue with relatively general remarks about the ‘mutant question’ (i.e. government action / legislation against mutants, human rights of mutants) and human-mutant relations, so that otherwise uncommon lemmata such as ‘mutant’, ‘human’ or ‘sentinel’ occur in all three of them. To increase the number of documents, I have split each 100-token batch into two parts at semantically meaningful points, e.g. when the text changes from captions to dialogue in AXM, or after the voice from the television is finished in CW:XM.
I then ran my LDA script (as described above) over these 6 documents with ~300 tokens, again with the assumption that there are 2 equally distributed topics (because I had carelessly hard-coded this number of topics in the script and now I’m too lazy to re-write it). This is the result after 1,000 iterations:
- topic A: x-men (95%), sentinel (93%), sentinel (91%), story (91%), different (90%), …
- topic B: day (89%), kitty (86%), die (86%), …
So topic A looks like the ‘mutant question’ issue with tokens like ‘x-men’ and two times ‘sentinel’, even though ‘mutant’ itself isn’t among the high-scoring tokens. Topic B, on the other hand, makes less sense (Kitty Pryde only appears in CW:XM, so that ‘kitty’ occurs in merely 2 of the 6 documents), and its highest percentages are also much lower than those in topic A. Maybe this means that there’s only one actual topic in this corpus.
Running Mallet over this corpus (2 topics, 10,000 iterations) yields an even less useful result. The first 5 words in each topic are:
- topic 0: mutant, know, x-men, ask, cooper
- topic 1: say, sentinel, morph, try, ready
(Valerie Cooper and Morph are characters that appear in only one comic, CW:XM and AXM, respectively.)
Topic 0 at least associates ‘x-men’ with ‘mutant’, but then again, ‘sentinel’ is assigned to the other topic. Thus neither topic can be related to an intuitively perceived theme in the comics. It’s clear how these topics were generated though: there’s only 1 document in which ‘sentinel’ doesn’t occur, the first half of the CW:XM excerpt, in which Valerie Cooper is interviewed on television. But ‘x-men’ and ‘mutant’ do occur in this document, the latter even twice, and also ‘know’ occurs more frequently (3 times) here than in other documents.
So the results from Mallet and maybe even my own Perl script seem to be correct, in the sense that the LDA algorithm has been properly performed and one can see from the results how the algorithm got there. But what’s the point of having ‘topics’ that can’t be matched to what we intuitively perceive as themes in a text?
The problem with our two example corpora here was that they were still not large enough for LDA to yield meaningful results. As with all statistical methods, LDA works better the larger the corpus. In fact, the idea of such methods is that they are best applied to amounts of text that are too large for a human to read. Therefore, LDA might not be that useful for disciplines (such as comics studies) in which it’s difficult to gather large text corpora in digital form. But do feel free to e.g. randomly download texts from Wikisource, and you’ll find that within them, LDA is able to successfully detect clusters of words that occur in semantically similar documents.
Happy Labour Day! And welcome to the second blog post of what is now a series of posts on Warren Ellis and politics. (If you’re wondering why Ellis and why politics, read last year’s post here.) This time we’re going to look at the first couple of issues of Trees (Image 2014-2016, art by Jason Howard).
Trees is a science fiction story set in the near future. The comic starts as a collection of episodes that are only loosely connected through the ‘Trees’ phenomenon, extraterrestrial pillars that have landed on various places on earth. There are three settings that are visited repeatedly and extensively in the first few issues:
- Cefalù, Sicily, Italy. This part of the story centers on Eligia Gatti, a young woman whose boyfriend Tito runs a neo-fascist gang. Tito sums up the situation: “Mafia to the south of us, ‘Ndrangheta to the north, the government collapsing, and us in the middle. Cefalu is ruined. Someone needs to take control of things.” (#2). This is the ‘strong man’ rhetoric once again: government has failed to protect society from crime, so a few individuals take matters into their own hands. Only this time, Tito’s gang merely seeks to replace organised crime by their own flavour of it, using mafia-like methods such as extortion. Furthermore, the gang members are clearly portrayed as villains, and as the story progresses, Eligia tries to break free from the fascists.
However, Eligia’s emancipation is not achieved through a reinstatement of governmental power. Instead, she turns to another individual who stands outside the law (as evidenced by his gun-wielding), the enigmatic elderly Professor Luca Bongiorno. Thus Ellis doesn’t provide a proper solution to this case of government failure.
- Spitsbergen, Norway. A group of young scientists from all over the world lives and works at an Arctic research facility. Due to the harsh climate, they live an isolated life removed from the rest of society. Ellis portrays this quasi-anarchy as a double-edged sword: on the one hand, the scientists are free to go about their work as they please without much supervision, and they don’t have to worry about food and housing. On the other hand, any possible conflicts are difficult to resolve because there is no impartial authority: when Sarah suggests to Marsh that he should return home, saying “I don’t think it’s even been legal for you to have been on station for two and a half years”, he answers, “So send someone up here to arrest me” (#2). Clearly, government has little power over the inhabitants of Blindhail Station. Marsh even implies that their life is a regression to barbarism: “What’s civilized? We live in bears-that-eat-people country” (#1).
- Shu, China. This appears to be a fictional city which has formed around one of the Trees. Access to it is restricted, but once you’ve managed to get inside the city walls, it turns out to be an artist colony of utopian qualities. We see Shu through the eyes of Chenglei, a young artist from rural China (or, as a citizen of Shu puts it, “from Pigshit Village in scenic Incest Province”) who is overwhelmed by the freedom and permissive attitude he finds there. The Shu story arc is Ellis’s love letter to anarchy. Unhindered by government authorities, Chenglei is for the first time in his life able to explore his sexuality, while back in “Pigshit Village […] people are still beaten by their own families for being gay”, as Chenglei notes in a later issue (#6).
In all three scenarios, Ellis asks what happens when governmental power loosens and anarchy (in different degrees and different flavours) sets in. The overall picture he paints is ambiguous – he shows both the risks and the opportunities of anarchy – but this exploration of anarchy can also be read as a refusal of authoritarian forms of government: clearly, the future as Ellis imagines it does not lie in governmental law enforcement.
It should be noted that some of the other story arcs in Trees are more explicitly political, but they only become important in later issues.
This is the first part of a series of blog posts celebrating 100 Years of Anime. (There is evidence of animated films produced in Japan before 1917, but 1917 is considered the ‘official’ year of birth of anime.) Instead of emphasising that anime and manga are completely different media and whining about how fandom (and sometimes even scholarly discourse) around Japanese popular culture is dominated by anime at the expense of manga, The 650-Cent Plague is going to join in on the celebration and run a couple of posts on anime.
Granted, there are many similarities between anime and manga, but today we’ll look at an aspect that is specific to animation: sakuga. I haven’t seen this term in scholarly literature yet, but there are various fan/journalistic resources online (see e.g. this collection of links) that explain sakuga. These definitions are fuzzy and somewhat contradictory – for instance, some stress the importance of the authorship of individual outstanding key animators while others are based on the number of animated frames per second – but all agree that ‘sakuga’ basically means ‘scenes of extraordinary animation quality’ (as opposed to the overall animation quality of an anime series). ‘Animation quality’ is, of course, another fuzzy term (is it about the amount of labour, ingenuity, or aesthetic effect?), but let’s for once not overtheorise things and instead turn to some examples of what I feel might pass as sakuga.
For many people, Re:ゼロから始める異世界生活 / Re:Zero − Starting Life in Another World (White Fox, dir. Masaharu Watanabe) was the best anime series of 2016. I wouldn’t go as far as that, but it’s definitely an anime series that exemplifies the state of the art of contemporary animation quality, and as such should be a rich source of sakuga. For the following list I’ve picked one sakuga scene from each episode:
So these are the sakuga scenes I found most impressive in each episode. Did I miss one? Tell me in the comments. Check out the Re:Zero sakuga at Sakugabooru too.
There have already been five posts about postmodernism on this weblog, so why a sixth one? Linda Hutcheon’s 1988 book A Poetics of Postmodernism: History, Theory, Fiction is interesting because it directly engages in a dialogue – or should I say, argument – with previous texts on postmodernism such as Fredric Jameson’s.
Hutcheon defines postmodernism as:
- “fundamentally contradictory”,
- “resolutely historical”, and
- “inescapably political” (p. 4, my emphasis).
This seems to contradict Jameson’s and other authors’ view of postmodernism as ahistorical and depthless. But what exactly does Hutcheon mean by ‘historical’ and ‘political’?
The treatment of the past in postmodern works is indeed different from earlier, modernist works. Postmodernism “suggests no search for transcendent timeless meaning, but rather a re-evaluation of and a dialogue with the past in the light of the present. […] It does not deny the existence of the past; it does question whether we can ever know that past other than through its textualized remains.” (pp. 19-20, emphasis LH).
Likewise, the political nature of postmodernism is a complex one, “a curious mixture of the complicitous and the critical” (p. 201). “The basic postmodernist stance [is] a questioning of authority” (p. 202), but at the same time it is also “suspicious of ‘heroes, crusades, and easy idealism’ […]” (p. 203, quoting Bill Buford). “The postmodern is ironic, distanced” (p. 203).
The contradictory nature of postmodernism, on the other hand, is something everyone can agree on. This characteristic seems to be more of a prerequisite for or superordinate concept of the other two.
Hutcheon’s idea of postmodernism is a relatively narrow one. Although she references many examples of postmodernist works (mainly novels), it becomes clear that those examples represent only a part, and probably not a large one at that, of contemporary cultural production. Which brings us to today’s comic, which is not quite as randomly selected as previous examples in this column: it might fit Hutcheon’s criteria (well, see below), but some other comics that have a more ‘postmodern’ feel to them might not.
Brahm Revel’s Guerillas vol. 1 (Oni Press, 2010) opens with a quotation attributed to French Prime Minister Georges Clemenceau (1841–1929). The first words of the comic proper are in a caption box that says, “Vietnam, 1970.” For the next 50 pages, the story follows John Francis Clayton, an “FNG” (Fucking New Guy) in a military unit in the Vietnam War. Revel pays a lot of attention to detail, such as military equipment and jargon. There are references to historic figures like Richard Nixon or Jane Goodall. And the depicted events are typical of what is commonly known about the Vietnam War: U.S. soldiers raping native women, torching villages, falling victim to the Viet Cong’s guerilla tactics, etc.
All of this serves to create a sense of historical accuracy. While the story narrated by Clayton can with some certainty be identified as fictional, the events just might have happened as depicted, in Vietnam, in 1970.
Then there’s a rupture around p. 56, at the end of the first chapter, when the chimpanzees are introduced, a rogue squad of trained apes equipped and dressed as U.S. soldiers, who fight against the Viet Cong on their own. Chapter 2 tells their origin as an experiment conducted by scientists (of German descent, of course). The chimpanzees exhibit a mix of human and animal behaviour; they thump their chests but smoke cigarettes.
This appears to be the contradiction that is central to Guerillas: the outlandish, ‘unrealistic’ motif of the scientifically enhanced apes clashes with an historically accurate, ‘realistic’ setting. While the beginning of this comic might be read as Revel’s version of what really happened in Vietnam, the story of the chimpanzees can hardly be interpreted this way: here we’re clearly in the realm of fiction, or entertainment, or fantasy. Of course, earlier fantasy and science fiction stories have used similar setups (e.g. Bram Stoker’s Dracula). However, the main difference is that in those classic stories, the authors went to great lengths to make the improbable seem plausible and fit into the realistic setting, whereas it’s harder to suspend one’s disbelief when reading Guerillas (not least because we’re reading it with the experience of many of those older similar stories).
According to Hutcheon, such a treatment of the past tells us something about the present, and this is also where the political nature of the work comes from. It is unreasonable to assume that the depiction of the grimness of the Vietnam War is a protest against, reassessment of, or coming-to-terms with it, given that the comic was made over 30 years after the end of the war. The ostensible reason for the Vietnam setting is that it makes more sense to deploy chimpanzee soldiers in the Vietnamese jungle than e.g. in the desert of the Gulf Wars, or in WWII in which the U.S. experience of the tropical regions was dominated by naval and aerial warfare (The Thin Red Line perhaps being the exception that proves the rule). But maybe Guerillas isn’t so time-specific after all. One of its themes is that a man learns from animals what humanity truly is, and this is a message that is relevant regardless of time and place: not unlike Pride of Baghdad by Vaughan and Henrichon, Guerillas can also be read as a commentary on the dehumanising effects of the war in Iraq, and by extension also Afghanistan and any other armed conflict.
But wouldn’t this – i.e. extrapolating from the specific to the universal – be a rather modernist reading? Indeed, Guerillas doesn’t seem to be the ideal example of Hutcheon’s postmodernism, but then again, few comics would meet her criteria without reservation. Still, Guerillas comes close. One can easily imagine how it might have qualified if Revel had made some different choices, e.g. if the protagonist would have been made identifiable as a real person (thus creating a contradiction between the genres of biography and fiction, cf. Hutcheon p. 9), or if the chimpanzee experiment would have been based on more advanced science and technology (thus creating a contradiction between different time layers, cf. Hutcheon p. 5). The resulting work would have been postmodern in Hutcheon’s sense, but whether it would have been a better comic is another question.
One blogpost is not enough to pay homage to the recently deceased Jirō Taniguchi, so here’s another one.
Another noteworthy but largely overlooked manga by Taniguchi is Chichi no Koyomi (My Father’s Journal), of which there is no English translation either. The reason for its neglect in the Western world is probably a different one, though: it might be too similar to Taniguchi’s magnum opus A Distant Neighborhood – which was originally published four years *after* Chichi no Koyomi. Reading these two manga in the ‘wrong’ order makes Chichi no Koyomi feel like a compressed, less daring (no supernatural time travel) and more episodic (thus somewhat haphazard) rip-off of A Distant Neighborhood, when in fact the latter was more of a logical continuation or evolution out of the former.
Die Sicht der Dinge (父の暦 / Chichi no Koyomi)
Language: German (translated from Japanese)
Author: Jirō Taniguchi
Publisher: Carlsen (originally Shōgakukan)
Year: 2008 (original run 1994)
Price: € 16,90
Website: https://www.carlsen.de/softcover/die-sicht-der-dinge/20582 (German)
Yōichi Yamashita (i.e. not Taniguchi himself but an autobiographically influenced fictitious character) is a middle-aged salaryman who lives in Tokyo with his wife. When his father dies, he needs to return to his native Tottori for the funeral, for the first time in 15 years. There he meets his uncle, his sister and other characters with whom he reminisces about his father’s life, Yōichi’s own childhood and how the rift between the two came to be.
The events in the past are shown as flashback sequences, even though they take up more space than the events in the present. I wouldn’t call the present-day sequences a framing narrative, though, because several chapters begin in the past, then switch to the present, before they switch back to the past again, so that the past frames the present. There is some structural variation and jumping back and forth in time. The most strikingly structured episode is the one in which seven-year-old Yōichi runs away from home to his uncle in search of his mother: adult Yōichi begins to tell this episode on pp. 19-25, but doesn’t pick it up again until 130 pages later.
Another interesting device, albeit employed only tentatively, is an unreliable narrator: two events from Yōichi’s childhood are first shown as he remembers them, but later he learns from his relatives how he actually misremembered them. This device makes the story more dynamic; just as in A Distant Neighborhood, the past isn’t fixed but changeable. However, there is also an emphasis on a historic event in Chichi no Koyomi, the Great Fire of Tottori in 1952, which makes the past more site- and time-specific in this manga than in A Distant Neighborhood.
Artistically, Chichi no Koyomi is Taniguchi at the top of his game. Particularly the characters and their facial expressions are spot-on, which is no small feat given the number of characters, most of which appear multiple times at different ages.
However, it should be noted that the German publisher Carlsen didn’t do a particularly good job at flipping the manga so that it now reads left-to-right in this German edition: the speech bubbles and captions are often arranged diagonally in the panel, in which case the reading order is bottom(!)-left to top-right, which is awfully confusing. Furthermore, some panels are mirrored and some are not, resulting in the old problems of right-handed characters becoming left-handed and the like.
That being said, Chichi no Koyomi is a classic Taniguchi manga that one shouldn’t miss. Together with The Walking Man and A Distant Neighborhood, this manga embodies the essence of Taniguchi’s work as a mangaka.
Rating: ● ● ● ● ○
Earlier this month, Jirō Taniguchi died of an undisclosed illness at the age of only 69. During a career that spanned almost five decades, he authored or co-authored a huge number of manga. However, outside of Japan, only a few of them have earned the recognition they deserve.
One of these oft-overlooked titles is Trouble Is My Business, written by Natsuo Sekikawa. Originally published in 1979–80 (not counting the sequel series), it is Taniguchi’s earliest work available in German. There are also French and Italian editions, but no English one yet as far as I know.
Trouble Is My Business (事件屋稼業 / Jikenya Kagyō)
Language: German (translated from Japanese)
Authors: Natsuo Sekikawa (writer), Jirō Taniguchi (artist)
Publisher: Schreiber & Leser (originally Futabasha)
Year: 2014 (original run 1979–1980)
Price: € 16,95
Website: http://www.schreiberundleser.de/index.php?main_page=index&cPath=33 (German)
Unlike in many of Taniguchi’s better-known manga, there is little to no autobiographical influence in Trouble Is My Business, except that the protagonist, Fukamachi, is the same age as Sekikawa and Taniguchi, and lives in Tokyo too. Instead of some contemplative family story, this is a collection of almost straightforward ‘hardboiled’ detective cases which are only loosely connected through the character of Fukamachi and his trouble with his ex-wife and daughter.
Rather than the crime cases and their resolution, the real draw here is the subtle humour which is usually based on the hapless, amateurish, down-and-out, small-time detective protagonist and his interaction with other quirky characters. But let’s focus on Taniguchi’s contribution, the artwork. Because already back then, in his early thirties, he had achieved mastery in draughtsmanship.
That is not to say his style didn’t evolve after Trouble Is My Business. The most noticeable difference to his later works is that he didn’t use screen tone as extensively back then, usually relying on parallel hatching to indicate volume and shadows. This results in an overall darker tonality, which is fitting for the ‘noir-ish’ story. My guess is that the reason for this artistic evolution is rather mundane: perhaps Taniguchi wasn’t yet successful enough to be able to hire an assistant who could take over the time-consuming task of applying the screen tones.
Another difference is the frequent display of his skill at depicting technical objects such as vehicles, watercraft, or firearms, whereas his (too overtly photo-referenced) cityscapes aren’t as impressive as in his later manga. Something Taniguchi excelled at, back then at least as much as in the 90s, is the portrayal of a vast range of different characters. Each of them has a realistic but distinct look (with the sole exception of the barkeeper at Los Lindos, who looks indistinguishable from Fukamachi).
Recommended for fans of the genre, or anyone who wants to discover a different side of Taniguchi.
Rating: ● ● ● ○ ○
In this second part of a two-part blog post (read part 1 here) I’ll review two more manga from 2016, the widely acclaimed A Silent Voice by Yoshitoki Ōima and the ‘dark horse’ Yona of the Dawn by Mizuho Kusanagi.
A Silent Voice (聲の形 / Koe no katachi) vol. 1
Language: German (translated from Japanese)
Author: Yoshitoki Ōima
Publisher: Egmont (originally Kōdansha)
Year: 2016 (originally 2013)
Number of volumes: 4 so far (completed with vol. 7 in Japan)
Price: € 7
This is it. This must be the best manga of 2016. While I can’t claim to have read all manga from last year, it’s inconceivable that another manga could be as good as A Silent Voice.
As with Orange, though, the synopsis, usually given as something along the lines of ‘deaf girl is bullied by her new classmate but then they get to know each other better’, didn’t sound that exciting. However, apart from the first 8 pages of a framing narrative, the girl (Nishimiya) doesn’t even appear until page 50. This gives us a lot of space to get acquainted with the compelling character of Shōya, a sixth-grader who (similarly to e.g. Bart Simpson) does evil things without really being evil. Everything he does is motivated by his desire to ‘defeat boredom’ by all means. It’s impossible not to like him when he exclaims, “I declare this day a triumph over boredom!”, and it’s understandable how he immediately sees his new classmate Nishimiya as a remedy for boredom and desperately tries to make use of her to this end.
The way Ōima crafts her story is simple but couldn’t be more effective. She contrasts Nishimiya’s ultimate kindness with Shōya’s ever-increasing meanness while at the same time evoking the reader’s sympathy for Shōya, so that we experience their conflict as a gut-wrenching lose–lose situation. It couldn’t be more emotionally affecting. And even though the manga goes on for 6 more volumes, it’s not even all that important whether Nishimiya will ever be able to forgive Shōya – the story as told in vol. 1 is already perfect in itself.
While the script would have been strong enough to work well even if it had been drawn by a lesser artist, the opposite is also true: Ōima could probably illustrate the proverbial phone book and it would still look good. The art of A Silent Voice is absolutely on par with the writing. Of particular ingenuity is the device of repeating the panel compositions of certain scenes (Shōya and his mates hanging out in his room, Shōya getting told off by his teacher, Shōya talking at Nishimiya) – not copy-and-pasting them but re-drawing them with myriad changed background details (the amount of which is incredible in many panels anyway).
Rating: ● ● ● ● ●
Yona of the Dawn (暁のヨナ / Akatsuki no Yona) vol. 1
Language: German (translated from Japanese)
Author: Mizuho Kusanagi
Publisher: Tokyopop (originally Hakusensha)
Year: 2016 (originally 2009)
Number of volumes: 3 so far (22 in Japan)
Price: € 5
With vol. 1 released in both Germany and the US and vols. 20–22 in Japan last year, plus a popular anime adaptation the year before, I would have expected Yona to be the most talked-about manga of 2016. Instead, I found it on only one best-of-2016 list. Does that mean it’s not actually that good?
Yona is marketed as a fantasy story for the shōjo demographic, which is an interesting niche – although ‘fantasy’ might be somewhat misleading, as there are no supernatural elements (at least in vol. 1), so it’s more of an alternate history story in a vaguely medieval East Asian setting. This genre mix means that the manga has to deliver not only on drama and romance but also on ‘swordplay’. While the drama/romance part works out fine (could there be anything more dramatic than Yona’s father getting killed by the man she is in love with?), the few action scenes seem stiff, especially when compared to manga by masters such as Sanpei Shirato, Gōseki Kojima, or Hiroaki Samura, who appear to feel more at home in the ‘samurai’ genre.
Another problem of this volume is its slow pace: at the end, Yona flees from her father’s murderer and embarks on a journey that will surely end in another dramatic confrontation with said killer. It’s palpable that this is the beginning of what will eventually become an epic and probably very exciting and good story – but in vol. 1, we’re simply not there yet.
Rating: ● ● ● ○ ○
To sum up, in my humble opinion, A Silent Voice is the best manga of the year 2016. However, there are several other strong ongoing series whose 2016 volumes I have yet to catch up on, so maybe there’s going to be a third review post later this year.