Category Archives: software

Convert docs with OS X terminal

I’m teaching a workshop on Japanese text mining this week and am getting all kinds of interesting practical questions that I don’t know the answer to. Today, I was asked if it’s possible to batch convert .docx files to .txt in Windows.

I don’t know Windows, but I do know Mac OS, so I discovered that one can use textutil in the terminal to do this. Just run this line to convert .docx -> .txt:

textutil -convert txt /path/to/DOCX/files/*.docx

You can convert to a bunch of different formats, including txt, html, rtf, rtfd, doc, docx, wordml, odt, or webarchive. It puts the files in the same directory as the source files. That’s it: enjoy!
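
One more tip: to convert to HTML instead and send the output to a separate folder, something like this should work (check man textutil for the exact option names on your system):

textutil -convert html -output-dir /path/to/output /path/to/DOCX/files/*.docx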

* Note: This worked fine with UTF-8 files using Japanese, so I assume it just works with UTF-8 in general. YMMV.

Taiyō project: first steps with data

As I begin working on my project involving Taiyō magazine, I thought I’d document what I’m doing so others can see the process of cleaning the data I’ve gotten, and then experimenting with it. This is the first part in that series: first steps with data, cleaning it, and getting it ready for analysis. If I have the Taiyō data in “plain text,” what’s there to clean? Oh, you have no idea.


website to jekyll

While my research diary has stalled out because I haven’t been researching (other than some administrative tasks like collecting and organizing article PDFs, and typing notes into Mendeley), I have made some progress on updating my website.

Specifically, I have switched over to using Jekyll, which is software that converts markdown/HTML and SASS/CSS to static web pages. Why did I want to do this? Because I want to have a consistent header and footer (navigation and that blurb at the bottom of every page) across the whole site, but don’t want to manually edit every single file every time I update one of those, or update the site structure/design. I also didn’t want to use PHP, because then all my files would be .php and, on top of that, it feels messier. I like static HTML a lot.

I’m just writing down my notes here for others who might want to use it too. I’ve only found tutorials that talk about how to publish your site to GitHub Pages. Obviously, I have my own hosting. I also already had a full static site coded in HTML and CSS, so I didn’t want to start all over again with markdown. (Markdown is just a different markup language from HTML; from what I can tell, you can’t get nearly the flexibility or semantic markup into your markdown documents that you can with HTML, so I’m sticking with the latter.) I wondered: all these tutorials show you how to do it from scratch, but will it be difficult to convert an existing HTML/CSS site into a Jekyll-powered site?

The answer is: no. It’s really really easy. Just copy and paste from your old site into some broken-up files in the Jekyll directory, serve, and go.

I recommend following the beginning of this tutorial by Tania Rascia. This will help you get Jekyll installed and set up.

Then, if you want a website — not a blog — what you want to do is just start making “index.html”, “about.html”, folders with more .html files (or .md if you prefer), etc., in your Jekyll folder. These will all be generated as regular .html pages in the _site directory when you start the server, and will be updated as long as the server is running. It’ll all be structured how you set it up in the Jekyll folder. For my site, that means I have folders like “projects” and “guides” in addition to top-level pages (such as “index.html”).
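
For example, a top-level page can stay plain HTML, with a small block of YAML front matter at the top telling Jekyll which layout to wrap it in. This is just a sketch: it assumes a layout named “default” in your _layouts folder, so swap in whatever your layout is actually called.

---
layout: default
title: Projects
---
<h1>Projects</h1>
<p>Regular HTML goes here; Jekyll wraps it in the shared header and footer from _layouts/default.html.</p>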

Finally, start your server and generate all those static pages. Put your CSS file on your web server wherever the link in your head element points to. (I have to use its full URL, starting with http://, because I have multiple folders, and if I just put “mollydesjardin.css” the non-top-level files will not know where to find it.) Then upload all the files from _site to your server and voilà, you have your static website.
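
Concretely, that stylesheet line in my head ends up looking something like this (with a placeholder domain standing in for the real one):

<link rel="stylesheet" href="http://www.example.com/mollydesjardin.css">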

I do not “get” Git well enough yet to follow some more complicated instructions I found for automatically pushing my site to my hosting. What I’m doing, which is probably the simplest (if slightly cumbersome) solution, is to manually SFTP those files to my web server as I modify them. Obviously, I do not have to upload and overwrite every file every time; I just select the ones I created or modified from the _site directory and upload those.

Hope this is helpful for someone starting out with Jekyll, converting an existing HTML/CSS site.

Pre-processing Japanese literature for text analysis

I recently wrote a small script to perform a couple of functions for pre-processing Aozora Bunko texts (text files of public domain, modern Japanese literature and non-fiction) to be used with Western-oriented text analysis tools, such as Voyant, other TAPoR tools, and MALLET. Whereas Japanese text analysis software focuses largely on linguistics (tagging parts of speech, lemmatizing, etc.), Western tools open up possibilities for visualization, concordances, topic modeling, and various other modes of analysis.

Why do these Aozora texts need to be processed? Well, a couple of issues.

  1. They contain ruby: basically, glosses of Chinese characters that give their pronunciation. These can be straightforward pronunciation help, or actually different words that give added meaning and context. While I have my issues with removing ruby, it’s impossible to do straightforward tool-based analysis without removing it, and many people who want to do this kind of analysis want it removed.
  2. The Aozora files are not exactly plain text: they’re HTML. The HTML tags and Aozora metadata (telling where the text came from, for example) need to be removed before analysis can be performed.
  3. There are no spaces between words in Japanese, but Western text analysis tools identify words by looking at where there are spaces. Without inserting spaces, it looks like each line is one big word. So I needed to insert spaces between the Japanese words.

How did I do it? My approach, because of my background and expertise, was to create a Python script that uses a couple of helpful libraries: BeautifulSoup for ruby removal based on HTML tags, and TinySegmenter for inserting spaces between words. My script requires you to have these packages installed, but it’s not a big deal to do so. You then run the script from the command line. It looks for all .html files in a directory, loads them, runs the pre-processing, and then outputs each processed file under the same filename but with a .txt extension, as a plain text UTF-8 encoded file.
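
In outline, that loop might look like the sketch below. The clean_text() placeholder stands in for the actual steps described in the rest of this post, and the Shift-JIS input encoding is my assumption (it’s common for Aozora HTML), not necessarily what the real script does.

import glob
import os

def clean_text(html):
    # placeholder for the real pre-processing described below
    # (ruby removal, tag stripping, word segmentation)
    return html

for path in glob.glob('/path/to/aozora/*.html'):
    with open(path, encoding='shift_jis') as f:     # adjust the encoding if your files differ
        html = f.read()
    with open(os.path.splitext(path)[0] + '.txt', 'w', encoding='utf-8') as out:
        out.write(clean_text(html))                 # same filename, .txt ending, UTF-8 plain text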

The first step in the script is to remove the ruby. Helpfully, the ruby is contained in several HTML tags. I had BeautifulSoup traverse the file and remove all elements contained within these tags; it removes both the tags and content.
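
As a minimal sketch of that step, assuming the gloss text lives inside <rt> and <rp> elements (the usual Aozora ruby markup; the actual script may target a slightly different set of tags):

from bs4 import BeautifulSoup

def strip_ruby(html):
    soup = BeautifulSoup(html, 'html.parser')
    for tag in soup.find_all(['rt', 'rp']):
        tag.decompose()   # removes the tag and its contents, leaving the base word behind
    return str(soup)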

Next, I used a very simple regular expression to remove everything in angle brackets – i.e. the HTML tags. This is kind of quick and dirty, and won’t work on every file in the universe, but in Aozora texts everything inside angle brackets is an HTML tag, so it’s not a problem here.
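
In Python, that quick-and-dirty step is a single substitution (again, safe only because Aozora files don’t use a literal < outside of tags):

import re

def strip_tags(text):
    return re.sub(r'<[^>]*>', '', text)   # delete anything between angle brackets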

Finally, I used TinySegmenter on the resulting HTML-free text to split the text into words. Luckily for me, it returns an array of words – basically, each word is a separate element in a list like [‘word1’, ‘word2’, … ‘wordn’] for n words. This makes my life easy for two reasons. First, I simply joined the array with a space between each word, creating one long string (the outputted text) with spaces between each element in the array (words). Second, it made it easy to just remove the part of the array that contains Aozora metadata before creating that string. Again, this is quick and dirty, but from examining the files I noted that the metadata always comes at the end of the file and begins with the word 底本 (‘source text’). Remove that word and everything after it, and then you have a metadata-free file.
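
A sketch of that last step, assuming the tinysegmenter package’s TinySegmenter().tokenize() interface, and assuming 底本 comes out of the segmenter as a single token (as it did in my files):

from tinysegmenter import TinySegmenter

def segment_and_trim(text):
    words = TinySegmenter().tokenize(text)   # a list like ['word1', 'word2', ..., 'wordn']
    if '底本' in words:
        words = words[:words.index('底本')]   # drop the Aozora metadata at the end of the file
    return ' '.join(words)                    # one long string with a space between each word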

Write this resulting text into a plain text file, and you have a non-ruby, non-HTML, metadata-free, whitespace-delimited Aozora text! Although you have to still download all the Aozora files individually and then do what you will with the resulting individual text files, it’s an easy way to pre-process this text and get it ready for tool-based (and also your-own-program-based) text analysis.

I plan to put the script on GitHub for your perusal and use (and of course modification) but for now, check it out on my Japanese Text Analysis research guide at Penn.

Japanese tokenization – tools and trials

I’ve been looking (okay, not looking, wishing) for a Japanese tokenizer for a while now, and today I decided to sit down and do some research into what’s out there. It didn’t take long – things have improved recently.

I found two tools quickly: kuromoji Japanese morphological analyzer and the U-Tokenizer CJK Tokenizer API.

First off – so what is tokenization? Basically, it’s separating sentences into words, or documents into sentences, or any text into some unit, so you can chunk that text into parts and analyze them (or do other things with them). When you tokenize a document by word, like a web page, you enable searching: this is how Google finds individual words in documents. You can also find keywords from a document this way, by writing an algorithm to choose the most meaningful nouns, for example. It’s also the first step in more involved linguistic analysis like part-of-speech tagging (that is, marking individual words as nouns, verbs, and so on) and lemmatizing (paring words down to their stems, such as removing plural markers and un-conjugating verbs).

This gives you a taste of why tokenization is so fundamental and important for text analysis. It’s what lets you break up an otherwise unintelligible (to the computer) string of characters into units that the computer can attempt to analyze. It can index them, search them, categorize them, group them, visualize them, and so on. Without this, you’re stuck with “words” that are entire sentences or documents, that the computer thinks are individual units based on the fact that they’re one long string of characters.

Usually, the way you tokenize is to break up “words” based on spaces (or sentences based on punctuation rules, etc., although that doesn’t always work). (I put “words” in quotes because you can really make any kind of unit you want; the computer doesn’t understand what words are, and in the end it doesn’t matter. I’m using “words” as an example here.) However, for languages like Japanese and Chinese (and to a lesser extent Korean) that don’t use spaces to delimit all words (for example, in Korean, particles are attached to nouns with no space in between, like saying “athome” instead of “at home”), you run into problems quickly. How do you break up texts into words when there’s no easy way to distinguish between them?
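
To make the problem concrete (Python here purely for illustration, with a made-up example sentence):

print('the cat sat on the mat'.split())   # ['the', 'cat', 'sat', 'on', 'the', 'mat']
print('猫がマットの上に座った'.split())      # ['猫がマットの上に座った'], i.e. one giant "word"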

The question of tokenizing Japanese may be a linguistic debate. I don’t know enough about linguistics to begin to participate in it, if it is. But I’ll quickly say that you can break up Japanese based on linguistic rules and dictionary rules – understanding which character compounds are nouns, which verb conjugations go with which verb stems (as opposed to being particles in between words), then breaking up common particles into their own units. This appears to be how these tools are doing it. For my own purposes, I’m not as interested in linguistic patterns as I am in noun and verb usage (the meaning rather than the kind) so linguistic nitpicking won’t be my area anyway.

Moving on to the tools. I put them through the wringer: Higuchi Ichiyō’s Ame no yoru, the first two lines, from Aozora bunko.

One, kuromoji, is the tokenizer behind Solr and Lucene. It does a fairly good job, although with Ichiyō’s uncommon word usage and conjugation, it faltered and couldn’t figure out that 高やか is one word; rather it divided it into 高 や か.  It gives the base form, reading, and pronunciation, but nothing else. However, in the version that ships with Solr/Lucene, it lemmatizes. Would that ever make me happy. (That’s, again, reducing a word to its base form, making it easy to count all instances of both “people” and “person” for example, if you’re just after meaning.) I would kill for this feature to be integrated with the below tool.

The other, U-Tokenizer, did significantly better, but its major drawback is that it’s done in the form of an HTTP request, meaning that you can’t put in entire documents (well, maybe you could? how much can you pass in an HTTP request?). If it were downloadable code with an API, I would be very happy (kuromoji is downloadable and has a command line interface). U-Tokenizer figured out that 高やか is one word, and also provides a list of “keywords,” which as far as I can tell is a bunch of salient nouns. I used it for a very short piece of text, so I can’t comment on how many keywords it would come up with for an entire document. The documentation on this is sparse, and it’s not open source, so it’s impossible to know what it’s doing. Still, it’s a fantastic tool, and also seems to work decently for Chinese and Korean.

Each of these tools has its strengths, and both are quite usable for modern and contemporary Japanese. (I really was cruel to feed them Ichiyō.) However, there is a major trial involved in using them with freely-available corpora like Aozora bunko. Guess what? Preprocessing ruby.

Aozora texts contain ruby marked up within the documents. I have my issues with stripping out ruby from documents that heavily use them (like Meiji writers, for example) because they add so much meaning to the text, but let’s say for argument’s sake that we’re not interested in the ruby. Now, it’s time to cut it all out. If I were a regular expressions wizard (or even had basic competency with them) I could probably strip this out easily, but it’s still time consuming. Download text, strip out ruby and other metadata, save as plain text. (Aozora texts are XHTML, NOT “plain text” as they’re often touted to be.) Repeat. For topic modeling using a tool like MALLET, you’re going to want to have hundreds of documents at the end of it. For example, you might be downloading all Meiji novels from Aozora and dividing them into chunks or chapters. Even the complete works of Natsume Sōseki aren’t enough without cutting them down into chapters or even paragraphs to make enough documents to use a topic modeling tool effectively. Possibly, run all these through a part-of-speech tagger like KH Coder. This is going to take a significant amount of time.
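
As for the chunking step mentioned above, the kind of thing I have in mind is a small helper like the sketch below (not tied to any particular corpus): it splits one plain-text file into fixed-size groups of paragraphs so that a topic modeling tool has enough “documents” to work with.

def chunk_paragraphs(text, paragraphs_per_chunk=20):
    # treat each non-empty line as a paragraph; tune the chunk size to taste
    paragraphs = [p for p in text.split('\n') if p.strip()]
    return ['\n'.join(paragraphs[i:i + paragraphs_per_chunk])
            for i in range(0, len(paragraphs), paragraphs_per_chunk)]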

Then again, preprocessing is an essential and extremely time-consuming part of almost any text analysis project. I went through a moderate amount of work just removing Project Gutenberg metadata and dividing into chapters a set of travel narratives that I downloaded in plain text, thankfully not in HTML or XML. It made for easy processing. With something that’s not already real plain text, with a lot of metadata, and with a lot of ruby, it’s going to take much more time and effort, which is more typical of a project like this. The digital humanities are a lot of manual labor, despite the glamorous image and the idea that computers can do a lot of manual labor for us. They are a little finicky with what they’ll accept. (Granted, I’ll be using a computer script to strip out the XHTML and ruby tags, but it’s going to take work for me to write it in the first place.)

In conclusion? Text analysis, despite exciting available tools, is still hard and time consuming. There is a lot of potential here, but I also see myself going through some trials to get to the fun part, the experimentation. Still, stay tuned, especially for some follow-up posts on these tools and KH Coder as I become more familiar with them. And, I promise to stop being difficult and giving them Ichiyō’s Meiji-style bungo.

free software day in Cambridge 9/15

Hi everyone,

It’s Free Software Day in Cambridge, MA, this Saturday (9/15) and there is a day-long event happening in celebration, and to bring the community together. If you’re interested in attending, it’s located at Cambridge College (1000 Mass Ave) and starts at 10 AM.

http://www.fsf.org/blogs/community/celebrate-software-freedom-day-2012-in-cambridge-massachusetts

pseudonymity and the bibliography

My research is on authorship, and specifically on varied practices of writing and ways that authorship is performed.

For my study – that is, late 19th-century Japan – the practice of using pseudonyms, multiple and various, is extremely common. It’s an issue that I consider quite a bit, and a practice that I personally find simultaneously playful and liberating. It’s the ultimate in creativity: creating not just a work but one’s authorship, and one’s authorial name, every time.

This does raise a practical issue, however, that leads me to think even more about the meaning and implications of using a pseudonym.

How does one create a bibliography of works written under pen names?

The easy version of the problem is this: I have a choice when making my master dissertation bibliography of citing works in a number of ways. I can cite them with that instance’s pen name, then the most commonly known pen or given name in brackets afterward. I can do the reverse. Or I can be troublesome and only cite the pen name. Then again, I could adopt the practice that is the current default – born of the now-standard habit of attributing works solely to the most commonly known name rather than to the name originally on the work – that is, to not bother with the original pen name at all, obscuring the original publication context entirely. I can pretend, for example, that Maihime was written by Mori Ogai, and not Mori Rintaro. This flies in the face of convention but is the only way that I can cite the work and remain consistent with the overarching argument that I make in my dissertation: that is, use of and attribution to specific, variable pen names matters, both for understanding context and also understanding the work itself. It goes without saying that this is crucial for understanding authorship itself.

But there is another issue, and it goes hand-in-hand with citing works by writers whose name does not follow Western convention of given name first, last name second. Of having two names at all. The issue comes in the form of citation managers.

I’ve been giving Zotero a go lately and quite enjoying it. But I find myself making constant workarounds because of most of my sources being by Japanese writers, and the writers of my primary sources not only being Japanese but also using pen names. My workaround is to treat the entire name as one single last name, so I can write it in the proper order and not have it wrangled back into “last name”, “first name” – both of those being not quite true here. For citing a Japanese writer, I’d want to retain the last name then given name order; for someone using a pen name, the issue is that no part of the name is a last or given name. It’s what I’d like to call an acquired name.

Mori Ogai is now the most commonly used name of the writer Mori Rintaro (Mori being the last name, Rintaro being his given name). Ogai is a shortened version of his early pen name Ogai Gyoshi. Ogai Gyoshi isn’t a false last plus given name. It’s always in the order Ogai Gyoshi, neither of them is a “real” name, and it is a phrase, not a name. It’s as though he’s using a word that happens to have a space in it.

So when I put some of Mori Rintaro’s writing into Zotero, I put in “Mori Rintaro” as the last name. Sometimes I just put in “Ogai” as the name, when he signs a piece that way. Occasionally it’s “Ogai Mori Rintaro” (this is, in fact, the author of Maihime – I made a shortcut above in my example). And then there are some pieces in which the last name in Zotero is “Ogai Gyoshi.”

I don’t know how to go about this any other way, but it’s less about me having to be a little hacky to get it to do what I want, and much more of a constant reminder of our current (Western) assumptions about names, authorship, and naming conventions. It’s a reminder of how different the time and place that I study is, and how much more dynamic and, frankly, fun it was to write in the late 19th century in Japan than it is now, either here as an American or even in Japan. Names are taken a bit more seriously now, I’d argue, and more literally. It’s a little harder to play with one’s name, to make one up entirely for a one-off use, at this point – and I think it’s for the worse.

(Obviously, there are exceptions: musicians come immediately to mind. And it’s not as though writers do not adopt pen names now. But it’s not in the same way. And this, incidentally, is something I love about the early Internet – I’m referring to the nineties in particular. Fun with handles, fun with names, all pseudonymous, and all about fluid, multiple identity.)

What I’m doing this summer at CDRH: overview

I’ve been here at CDRH (The Center for Digital Research in the Humanities) at the University of Nebraska-Lincoln since early May, and the time went by so quickly that I’m only writing about what I’m doing a few weeks before my internship ends! But I’m in the thick of things now, in terms of my main work, so this may be the perfect time to introduce it.

My job this summer is (mostly) to update TokenX for the Willa Cather Archive (you can find it from the “Text Analysis” link at http://cather.unl.edu). I’m updating it in a few senses:

  1. Redesigning the basic interface. This means starting from scratch with a list of functions that TokenX performs, organizing them for user access, figuring out which categories will form the interface (and what to put under each), and then making a visual mockup of where all this will go.
  2. Taking this interface redesign to a new Web site for TokenX at the Cather Archive.* The site redesign mostly involves adapting the new interface for the Web. Concretely, I’m doing everything from the ground up with HTML5 and styles separated into CSS (and aiming for modularity, I’m using multiple stylesheets that style at different levels of functionality – for example, the color scheme, etc., is separated from the rest of the site to be modified or switched out easily). The goal is to avoid JavaScript completely, and I think we’re going to make it happen. We’re also aiming for text rather than images (for example, in menus) and keeping the site as basic and functional as possible. After all, this is an academic site and too much flashy will make people suspicious. 😀
  3. The exciting part: Implementing as much of TokenX with the new interface as I can in the time I’m here. Why is it exciting?
    • TokenX is written in XSLT, which tends to be mentioned in a cursory way as “stylesheets for XML” as though it’s like CSS. It’s not. It’s a functional programming language with syntax devised by a sadist. XSLT has a steep learning curve and I have had 9 weeks this summer to try my best to get over it before I leave CDRH. I’m doing my best and it’s going better than I ever imagined.
    • I’m also getting to learn how XSLT is often used to generate Web sites (which is what I’m doing): using Apache Cocoon. Another technology that I had no idea existed before this summer, and which is coming to feel surprisingly comfortable at this point.
    • I have never said so many times in a span of only two months, “I’m glad I had those four years of computer science curriculum in college. It’s paying off!” Given that I never went into software development after graduating, and haven’t done any non-trivial programming in quite a long time, I had mostly dismissed the idea that my education could end up being this relevant to my work now. And honestly, I miss using it. This is awesome.

I’m getting toward the end of implementing all of the functionality of TokenX in XSLT for the new Web site, hooking that up with the XSLT that then generates the HTML that delivers it to the user. (To be more concrete for those interested in Cocoon, I’m writing a sitemap that first processes the requested XML file with the appropriate stylesheet for transformation results, then passes those results on to a series of formatting stylesheets that eventually produce the final HTML product.) And I’m about midway through the process of going from Web prototype to basic site styling to a more polished end result. I’ve got 2.5 weeks left now, and I’m on track to have it up and running pretty soon! I’ll keep you updated with comments about the process – both XSLT, and crafting the site with HTML5 and CSS – and maybe some screenshots.
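
For the curious, the sitemap fragment I described in that parenthetical looks roughly like the schematic below. The match pattern and file names are invented for illustration; this is not the actual CDRH configuration.

<map:match pattern="tokenx/*.html">
  <map:generate src="texts/{1}.xml"/>              <!-- the requested XML file -->
  <map:transform src="xslt/tokenx-analysis.xsl"/>  <!-- transformation results -->
  <map:transform src="xslt/format-site.xsl"/>      <!-- formatting stylesheet(s) -->
  <map:serialize type="html"/>                     <!-- final HTML product -->
</map:match>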

* TokenX can be, and is, used for more than this collection at CDRH. Notably it’s used at the Walt Whitman Archive in basically the same way as Cather. But we have to start somewhere for now, and expand as we can later.

Comfortable Reading With Aozora Bunko

I’ve started a guide to reading software (and browser recommendations) for reading texts from the volunteer-led collection of Japanese e-texts, Aozora Bunko 青空文庫.

Aozora is a wonderful resource, but the problem for anyone who’s read much Japanese fiction is that the reading orientation – and correct display of furigana (ruby) – leave a lot to be desired. Reading in the browser limits the reader to a long-lined, left-to-right orientation, when in paperback we’d all be reading vertical, short-line, right-to-left (and probably in bunkobon 文庫本 format!). While getting furigana to show up properly in the browser helps a lot, we still need software to reorient and resize the text – not to mention allow us to read texts on devices other than our computers.

My guide will always be a work in progress, so please do offer links to other software or tools that you know about.

Please check out my guide to browsers, ruby, PC/Mac and mobile software:

A Reader’s Guide to Aozora Bunko / 青空文庫読者へのガイド