As I begin working on my project involving Taiyō magazine, I thought I’d document what I’m doing so others can see the process of cleaning the data I’ve gotten, and then experimenting with it. This is the first part in that series: first steps with data, cleaning it, and getting it ready for analysis. If I have the Taiyō data in “plain text,” what’s there to clean? Oh, you have no idea.
Taiyō magazine and nationhood
What am I working on these days? Well, one thing is working with the Taiyō magazine corpus (1895-1925, selected articles) from NINJAL, released on CD about 10 years ago but currently being prepared for web release. In addition, I should note that Taiyō has been reproduced digitally as a paid resource through JKBooks (on the JapanKnowledge+ platform).
Taiyō was a general-interest magazine spanning the Meiji through Taishō periods in Japan, with articles on all topics as well as fiction, and innovative for its time in 1895 in its use of lithography to reproduce pages of photographs. (And let me tell you, the photo subjects were random at the time: battleships, various nations’ viceroys, stuff like that. I’m not making this up.) Unfortunately, the text-only nature of my project doesn’t reflect the cool printing technology and visual character of the magazine, but it got me wondering: what can I do with just the text of the articles and the metadata kindly provided by NINJAL (including genre by NDL classification and style of writing)?
Because I’m working on another project (under wraps and in its very beginning stages at the moment) involving periodicals in the Japanese empire, I was already thinking about this question. I hit upon a very basic but important topic: what language did Japanese publications use to talk about Japan at the time? With “Japan” in the early 20th century, we can think of both a nation and an empire, with blurred and constantly shifting boundaries. Over the span of Taiyō’s publication, Japan annexed both Korea and Taiwan, increased hostilities with China, and battled (and defeated) Russia in the Russo-Japanese War (thus gaining some territories there). There was a lot going on to keep Japan’s borders in flux, and to make Japanese readers question the limits and definition of their “nation.”
Especially because of the early 20th-century discourse of naichi 内地 (inner lands or “home islands,” referring to the Japanese archipelago we know today) and gaichi 外地 (outer lands or “colonies,” referring to Korea and Taiwan), both of which were subsumed under the name of Japan, I’m really interested in how those terms were being used, what other terms might have been used as well, and what qualities and relationships were associated with them. How did Japanese writers define these areas, and how did that change over time? While I can’t get into the minds of people in the imperial period, I can look at one of its most popular magazines, intended for a broad audience, to see at least the public, print discourse of the nation and empire.
How to work with it, though? That’s where I’m still just beginning. It’s a daunting project in some ways. For example, I am not a linguist, let alone a Japanese linguist. I haven’t specialized in this period in the past, so finding keywords for territories will take some research on my part (for example, there were multiple names for Taiwan at the time in addition to the gaichi reference). Moreover, the corpus is 1.2 GB of UTF-8 text (which I converted from sentence-tokenized XML to word-tokenized, untagged text). It breaks Voyant Server and Topic Modeling Tool on my machine with 12 GB of RAM when I attempt to analyze the whole thing at once. Of course, I could split it up, but that raises another methodological question: how and why to split it up? What divisions should I use: years, genres, authors? Right now I have the corpus in text files by article, but I could combine those articles in any number of ways.
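To make that concrete, here is a hypothetical sketch of one way to regroup the per-article files, in this case by year. It assumes a metadata table called metadata.csv with “filename” and “year” columns and an articles/ directory of per-article UTF-8 text files; the actual NINJAL metadata fields and my own file layout differ, so treat this as an illustration only.

```python
# Hypothetical sketch: combine per-article text files into one file per year.
# Assumes a metadata.csv with "filename" and "year" columns and an articles/
# directory of per-article UTF-8 text files -- the real NINJAL metadata
# fields and my own file layout will differ.
import csv
import os
from collections import defaultdict

groups = defaultdict(list)
with open("metadata.csv", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        groups[row["year"]].append(row["filename"])

os.makedirs("by_year", exist_ok=True)
for year, filenames in groups.items():
    with open(os.path.join("by_year", f"{year}.txt"), "w", encoding="utf-8") as out:
        for name in filenames:
            with open(os.path.join("articles", name), encoding="utf-8") as article:
                out.write(article.read() + "\n")
```

Swapping the year column for a genre or author column would give the other groupings mentioned above.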
I am also somewhat stymied by methodologies for analysis, but my plan at the moment is to start by doing some basic visualizations of the articles, in different groupings, as an exploration of what kinds of things people talked about in Taiyō over time. Are they even talking about the nation? When they talk about naichi, what kinds of things do they associate with those territories, as opposed to gaichi? Is the distinction changing over time, and is it even a reliable distinction?
As a Price Lab Fellow this year at Penn, I hope to explore these questions and start to nail down what I want to analyze in more detail over time in Taiyō — and hopefully gain some insight into the language of empire in Japan 1895-1925.
In addition, I’ll be presenting on this at a workshop at the University of Chicago in November, so if you’re in the area, please attend and help me figure all this out!
Pre-processing Japanese literature for text analysis
I recently wrote a small script to perform a couple of functions for pre-processing Aozora Bunko texts (text files of public-domain modern Japanese literature and non-fiction) so they can be used with Western-oriented text analysis tools, such as Voyant, other TAPoR tools, and MALLET. Whereas Japanese text analysis software focuses largely on linguistics (tagging parts of speech, lemmatizing, etc.), Western tools open up possibilities for visualization, concordances, topic modeling, and various other modes of analysis.
Why do these Aozora texts need to be processed? Well, there are a couple of issues.
- They contain ruby, which is basically a gloss on Chinese characters giving their pronunciation. These glosses can be straightforward pronunciation help, or actually different words that add meaning and context. While I have my issues with removing ruby, it’s impossible to do straightforward tool-based analysis without removing it, and many people who want to do this kind of analysis want it removed.
- The Aozora files are not exactly plain text: they’re HTML. The HTML tags and Aozora metadata (telling where the text came from, for example) need to be removed before analysis can be performed.
- There are no spaces between words in Japanese, but Western text analysis tools identify words by looking at where there are spaces. Without inserting spaces, it looks like each line is one big word. So I needed to insert spaces between the Japanese words.
How did I do it? My approach, because of my background and expertise, was to write a Python script using a couple of helpful libraries: BeautifulSoup for ruby removal based on HTML tags, and TinySegmenter for inserting spaces between words. My script requires you to have these packages installed, but that’s not a big deal to do. You then run the script from the command line. It looks for all .html files in a directory, loads them and runs the pre-processing, then outputs each processed file under the same filename with a .txt extension, as a plain-text, UTF-8-encoded file.
The first step in the script is to remove the ruby. Helpfully, the ruby is contained in several HTML tags. I had BeautifulSoup traverse the file and remove all elements contained within these tags; it removes both the tags and content.
Next, I used a very simple regular expression to remove everything in angle brackets – i.e., the HTML tags. This is kind of quick and dirty, and won’t work on every file in the universe, but in Aozora texts everything inside angle brackets is an HTML tag, so it’s not a problem here.
Finally, I used TinySegmenter on the resulting HTML-free text to split the text into words. Luckily for me, it returns an array of words – basically, each word is a separate element in a list like [‘word1’, ‘word2’, … ‘wordn’] for n words. This makes my life easy for two reasons. First, I simply joined the array with a space between each word, creating one long string (the outputted text) with spaces between each element in the array (words). Second, it made it easy to just remove the part of the array that contains Aozora metadata before creating that string. Again, this is quick and dirty, but from examining the files I noted that the metadata always comes at the end of the file and begins with the word 底本 (‘source text’). Remove that word and everything after it, and then you have a metadata-free file.
Write the resulting text to a plain text file, and you have a ruby-free, HTML-free, metadata-free, whitespace-delimited Aozora text! Although you still have to download all the Aozora files individually and then do what you will with the resulting individual text files, it’s an easy way to pre-process this text and get it ready for tool-based (and also your-own-program-based) text analysis.
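Putting those steps together, here is a minimal sketch of the kind of script described above, assuming the beautifulsoup4 and tinysegmenter packages are installed (pip install beautifulsoup4 tinysegmenter). It’s an approximation rather than the script itself, and details like the input encoding may need adjusting for your copies of the files.

```python
# Minimal sketch of the pre-processing pipeline described above (not the
# script itself): strip ruby, strip HTML tags, segment into words, drop the
# trailing Aozora metadata, and write a whitespace-delimited UTF-8 file.
import glob
import os
import re

from bs4 import BeautifulSoup
import tinysegmenter

segmenter = tinysegmenter.TinySegmenter()

for path in glob.glob("*.html"):
    # Aozora HTML files are typically Shift-JIS encoded; adjust if yours differ.
    with open(path, encoding="shift_jis", errors="ignore") as f:
        soup = BeautifulSoup(f.read(), "html.parser")

    # 1. Remove the ruby glosses: <rt> holds the reading and <rp> the fallback
    #    parentheses; decompose() deletes both the tags and their contents.
    for tag in soup.find_all(["rt", "rp"]):
        tag.decompose()

    # 2. Quick and dirty: strip everything inside angle brackets (the HTML tags).
    text = re.sub(r"<[^>]*>", "", str(soup))

    # 3. Insert spaces between words with TinySegmenter.
    words = segmenter.tokenize(text)

    # 4. Drop the metadata at the end of the file, which begins with 底本
    #    (assuming the segmenter returns it as a single token).
    if "底本" in words:
        words = words[: words.index("底本")]

    out_path = os.path.splitext(path)[0] + ".txt"
    with open(out_path, "w", encoding="utf-8") as out:
        out.write(" ".join(words))
```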
I plan to put the script on GitHub for your perusal and use (and of course modification) but for now, check it out on my Japanese Text Analysis research guide at Penn.
Japanese tokenization – tools and trials
I’ve been looking (okay, not looking, wishing) for a Japanese tokenizer for a while now, and today I decided to sit down and do some research into what’s out there. It didn’t take long – things have improved recently.
I found two tools quickly: kuromoji Japanese morphological analyzer and the U-Tokenizer CJK Tokenizer API.
First off – so what is tokenization? Basically, it’s separating a text into some unit – sentences into words, documents into sentences, and so on – so that you can chunk the text into parts and analyze them (or do other things with them). When you tokenize a document by word, like a web page, you enable searching: this is how Google finds individual words in documents. You can also pull keywords out of a document this way, by writing an algorithm to choose the most meaningful nouns, for example. It’s also the first step in more involved linguistic analysis like part-of-speech tagging (think marking individual words as nouns, verbs, and so on) and lemmatizing (paring words down to their stems, such as removing plural markers and un-conjugating verbs).
This gives you a taste of why tokenization is so fundamental and important for text analysis. It’s what lets you break an otherwise unintelligible (to the computer) string of characters into units that the computer can attempt to analyze. It can index them, search them, categorize them, group them, visualize them, and so on. Without this, you’re stuck with “words” that are entire sentences or documents, which the computer treats as individual units simply because they’re one long string of characters.
Usually, the way you tokenize is to break up “words” based on spaces (or sentences based on punctuation rules, etc., although that doesn’t always work). (I put “words” in quotes because you can really use any kind of unit you want; the computer doesn’t understand what words are, and in the end it doesn’t matter. I’m just using “words” as an example here.) However, for languages like Japanese and Chinese (and to a lesser extent Korean) that don’t use spaces to delimit all words – for example, in Korean, particles are attached to nouns with no space in between, like saying “athome” instead of “at home” – you run into problems quickly. How do you break up a text into words when there’s no easy way to distinguish between them?
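To make the problem concrete, here’s a tiny illustration in Python using TinySegmenter (the same library used in the pre-processing script in the previous post). The sentence is my own example, and the segmentation shown in the comments is approximate rather than quoted output from any of the tools discussed here.

```python
# Splitting on whitespace finds no word boundaries in a Japanese sentence,
# while a segmenter breaks it into units. The sentence is my own example and
# the segmentation shown in the comment is approximate.
import tinysegmenter

sentence = "私は東京に住んでいます"  # "I live in Tokyo"

print(sentence.split())
# ['私は東京に住んでいます'] -- one long "word", since there are no spaces

segmenter = tinysegmenter.TinySegmenter()
print(segmenter.tokenize(sentence))
# something like ['私', 'は', '東京', 'に', '住ん', 'で', 'い', 'ます']
```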
The question of tokenizing Japanese may be a linguistic debate. I don’t know enough about linguistics to begin to participate in it, if it is. But I’ll quickly say that you can break up Japanese based on linguistic rules and dictionary rules – understanding which character compounds are nouns, which verb conjugations go with which verb stems (as opposed to being particles in between words), then breaking up common particles into their own units. This appears to be how these tools are doing it. For my own purposes, I’m not as interested in linguistic patterns as I am in noun and verb usage (the meaning rather than the kind) so linguistic nitpicking won’t be my area anyway.
Moving on to the tools. I put them through the wringer with the first two lines of Higuchi Ichiyō’s Ame no yoru, from Aozora Bunko.
One, kuromoji, is the tokenizer behind Solr and Lucene. It does a fairly good job, although with Ichiyō’s uncommon word usage and conjugation it faltered: it couldn’t figure out that 高やか is one word and divided it into 高 や か instead. It gives the base form, reading, and pronunciation, but nothing else. However, in the version that ships with Solr/Lucene, it lemmatizes. Would that ever make me happy. (That’s, again, reducing a word to its base form, which makes it easy to count all instances of both “people” and “person,” for example, if you’re just after meaning.) I would kill for this feature to be integrated with the tool below.
The other, U-Tokenizer, did significantly better, but its major drawback is that it’s done in the form of an HTTP request, meaning that you can’t put in entire documents (well, maybe you could? how much can you pass in an HTTP request?). If it were downloadable code with an API, I would be very happy (kuromoji is downloadable and has a command line interface). U-Tokenizer figured out that 高やか is one word, and also provides a list of “keywords,” which as far as I can tell is a bunch of salient nouns. I used it for a very short piece of text, so I can’t comment on how many keywords it would come up with for an entire document. The documentation on this is sparse, and it’s not open source, so it’s impossible to know what it’s doing. Still, it’s a fantastic tool, and also seems to work decently for Chinese and Korean.
Each of these tools has its strengths, and both are quite usable for modern and contemporary Japanese. (I really was cruel to feed them Ichiyō.) However, there is a major trial involved in using them with freely-available corpora like Aozora bunko. Guess what? Preprocessing ruby.
Aozora texts contain ruby marked up within the documents. I have my issues with stripping out ruby from documents that use it heavily (Meiji writers, for example), because it adds so much meaning to the text, but let’s say for argument’s sake that we’re not interested in the ruby. Now it’s time to cut it all out. If I were a regular expressions wizard (or even had basic competency with them), I could probably strip it out easily, but it would still be time consuming. Download the text, strip out the ruby and other metadata, save as plain text. (Aozora texts are XHTML, NOT “plain text” as they’re often touted to be.) Repeat. For topic modeling with a tool like MALLET, you’re going to want hundreds of documents at the end of this. For example, you might be downloading all Meiji novels from Aozora and dividing them into chunks or chapters. Even the complete works of Natsume Sōseki aren’t enough without cutting them down into chapters or even paragraphs to make enough documents to use a topic modeling tool effectively. Possibly, you then run all of these through a part-of-speech tagger like KH Coder. This is going to take a significant amount of time.
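As an aside, the chunking step itself is simple once the texts have been cleaned into plain, whitespace-delimited files. Here’s a minimal sketch that splits one such file into fixed-size pieces; the filename and the 1,000-word chunk size are arbitrary assumptions on my part, not anything prescribed by MALLET.

```python
# Minimal sketch: split a whitespace-delimited text into fixed-size chunks so
# a topic modeling tool has enough "documents" to work with. The input
# filename and the chunk size of 1000 words are arbitrary assumptions.
def chunk_words(words, size=1000):
    """Yield successive lists of `size` words."""
    for i in range(0, len(words), size):
        yield words[i:i + size]

with open("soseki_kokoro.txt", encoding="utf-8") as f:  # hypothetical filename
    words = f.read().split()

for n, chunk in enumerate(chunk_words(words)):
    with open(f"kokoro_chunk_{n:03d}.txt", "w", encoding="utf-8") as out:
        out.write(" ".join(chunk))
```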
Then again, preprocessing is an essential and extremely time-consuming part of almost any text analysis project. I went through a moderate amount of work just removing Project Gutenberg metadata and dividing a set of travel narratives into chapters, and those I had downloaded in plain text, thankfully not in HTML or XML, which made for easy processing. With something that’s not already truly plain text, that has a lot of metadata, and that has a lot of ruby, it’s going to take much more time and effort, and that is more typical of a project like this. The digital humanities involve a lot of manual labor, despite the glamorous image and the idea that computers can do that labor for us. Computers are a little finicky about what they’ll accept. (Granted, I’ll be using a script to strip out the XHTML and ruby tags, but it’s going to take work for me to write it in the first place.)
In conclusion? Text analysis, despite the exciting tools available, is still hard and time consuming. There is a lot of potential here, but I also see myself going through some trials to get to the fun part, the experimentation. Still, stay tuned, especially for some follow-up posts on these tools and KH Coder as I become more familiar with them. And I promise to stop being difficult and feeding them Ichiyō’s Meiji-style bungo.
My disagreement with authorship attribution
I’m torn: I’m very interested in stylometry, but I have issues with the fundamental questions that are asked in this field, in particular authorship attribution.
In my research, I’ve thought and written quite a bit about authorship. My dissertation looked at changing concepts of authorship – the singular, cohesive, Romantic genius author as established in collected editions in Japan at the turn of the 20th century – and also at actual practices of writing and authorship that preceded and accompanied these developments. My conclusion about authorship was that it is a kind of performance, embedded in and never preceding the text, and is not coextensive with the historical writer(s) behind the performance – pseudonymous, collective, anonymous, or otherwise.
These performances are necessarily contextualized by space, time, society, culture, literary trends, place of publication, and audience. They are more or less without meaning if one doesn’t take context into account, even if not all relevant contexts at once. For a performance takes place within a historical, cultural, and literary moment, and does not exist independently of it. I see that place of performance as both the text and its place of publication, its material manifestation; and it is a performance that is inextricably linked to reader reception.
I also don’t see these performances as necessarily creating a unified authorial identity or unified author-function across space, time, and texts. This may sound extremely counterintuitive given that many performances of authorship share appellations and can be “attributed” to the “same” author, and I recognize that my argument is downright bizarre at times. I blame it on having spent too much time thinking about the implications of this topic. But in a way, our linking of these performances after the fact is artificial, and these different author-functions are, for me, so linked to the time and place of both publication and reading – whether contemporary or not – that they can be seen as separate as well. This is why I concluded that collected literary anthologies are constructing – inventing – an entirely artificial “author” out of the works associated, after the fact, with a historical, individual writer, whose identity and name may not have coincided with that of the authorial performance at all in the first place.
So, that said, let me get to my disagreement with authorship attribution. It’s fundamentally asking the wrong question of authors and authorship: who “really” wrote this text? My argument is that the hand of the historical writer “behind” the authorial performance is a moot point; what matters is the name, or lack of a name, attributed to the text when it is published, republished, read, and reread over time. It’s the performance that takes place at the site of the text, coinciding with and following the creation of the text, deeply associated with and embedded in the text, and located within reception rather than intention. It takes place at a different site than the hand of the historical writer holding a pen or the mind creating an idea. And so the search for the “real” identity of the author is beside the point; what is happening here is really “writership attribution” that is something separate from authorship.
A colleague recently asked me, too, what the greater goal of authorship attribution is – what is it beyond finding out the person behind the text? What is it besides deciding that the entity constructed with the name Shakespeare “really” wrote an unattributed or mysterious text? And I couldn’t answer this question, which brings me to my second fundamental problem with authorship attribution. I don’t see an overarching research question guiding methodology, besides the narrow goal of establishing writership of a text. This could be my own ignorance, and I’d be happy to be corrected on it.
I’m interested to hear your thoughts!