
research diary go


Lately, I feel like I’m stuck in short-term thinking. While I hear “be in the moment” is a good thing, I’m overly in the moment. I’m having a hard time thinking long-term and planning out projects, let alone sticking to any kind of plan. Not that I have one.

A review of my dissertation recently went online, and of course some reactions to my sharing that were “what have you published in journals?” and “are you turning it into a book?” I graduated three years ago, and the dissertation was finished six months prior to that and handed in. This summer, I’ll be looking at four years of being “done” without much to show for the intervening time.

Of course, it’s hard to show something when you have a full-time job that doesn’t include research as a professional component. But if I want to do it for myself — and I do — that means that I need to come up with a non-job way to motivate myself and stay on track.

That brings me to the title of this post. My mother recently had a “meeting with herself” at the end of the work week to check in on what she meant to do and what actually happened. It sounds remarkably productive to me as a way to keep yourself 1) kind of on track, and 2) in touch with your own habits and aspirations. It’s easy to lose touch with those things in the weekly grind.

I’ve decided to have a weekend meeting with myself every week and, as part of it, write a narrative of what I actually did. I’ll write that narrative before I review my list of aspirations for the previous week; then, when I compare the two, the point is not to beat myself up over “not meeting goals” but to refine my aspirations based on how I actually work (or don’t). As part of that — to hold myself accountable and to start a dialogue with others — I’ll post a cleaned-up version of that research diary here once a week. Don’t expect detailed notes, but do expect a diary of my process and the kinds of activities I engage in when doing research and writing.

I hope this can be helpful to a beginning researcher and spark some conversation with more experienced ones. While this is a personal journey of a sort, it is public, and I welcome your comments.

#dayofDH Meiroku zasshi 明六雑誌 project

It’s come to my attention that Fukuzawa Yukichi’s (and others’) early Meiji (1868-1912) journal, Meiroku zasshi 明六雑誌, is available online not just as PDF (which I knew about) but also as a fully tagged XML corpus from NINJAL (and oh my god, it has lemmas). All right!


I recently met up with Mark Ravina at the Association for Asian Studies meeting; he brought this to my attention, and we’ve been brainstorming about what we can do with it as a proof-of-concept project before moving on to other early Meiji documents. We have big ideas, like training OCR to tell the katakana ニ apart from the kanji 二, for example; Meiji documents generally break OCR for reasons like this, because they’re so different from contemporary Japanese. It’s like asking Acrobat to handle a medieval manuscript, in some ways.

But to start, we want to run the contents of Meiroku zasshi through tools like MALLET and Voyant, just to see how they handle non-Western languages (I don’t expect any problems, but we’ll see) and what we get out of them. I’d also be interested in going back to the Stanford CoreNLP API and seeing what kind of linguistic analysis we can do there. (First, I have to think of a methodology. :O)

In order to do this, we need whitespace-delimited text, with words separated by spaces. I’ve written about this elsewhere, but to sum up: Japanese doesn’t separate words with spaces, so tools intended for Western languages treat a whole passage as one big word. There’s currently no easy way I can find to do this splitting; I’m working on an application that both strips ruby from Aozora bunko texts and splits words with spaces, but it’s coming along slowly. How can we get this for Meiroku zasshi in a quick and dirty way that lets us just play with the data?
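To give a sense of what those two steps involve, here is a minimal sketch, assuming MeCab and its Python binding are installed. A tokenizer trained on contemporary Japanese won’t handle Meiji-era text well (that’s part of the problem I mentioned above), so this only illustrates the output format we’re after, not a real solution for this corpus.

```python
import re
import MeCab  # assumes MeCab and the mecab Python binding are installed

def strip_ruby(text):
    """Remove Aozora bunko-style ruby annotations like 漢字《かんじ》."""
    text = re.sub(r"《[^》]*》", "", text)    # the reading inside 《 》
    text = text.replace("｜", "")              # marker placed before the base text
    return re.sub(r"［＃[^］]*］", "", text)   # editorial notes like ［＃...］

def split_words(text):
    """Return the text with words separated by spaces (wakati-gaki)."""
    tagger = MeCab.Tagger("-Owakati")          # space-delimited output mode
    return tagger.parse(text).strip()

# Hypothetical example sentence, just to show the pipeline.
print(split_words(strip_ruby("明六社《めいろくしゃ》の雑誌を読む。")))
```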

So today after work, I’m going to use Python’s ElementTree library (xml.etree.ElementTree) to take the contents of the word tags from the corpus and just spit them out into a text file delimited by spaces. Quick and dirty! I’ve been meaning to do this for weeks, but since it’s a “day of DH,” I thought I’d use the opportunity to motivate myself. Then we can play.
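Here’s roughly what that script will look like: a minimal sketch, assuming the word-level elements in the NINJAL XML are tagged <word> (the actual element names and file names are placeholders and need to be checked against the corpus markup):

```python
# Quick-and-dirty extraction: pull the text of each word element out of the
# corpus XML and write it to a space-delimited plain-text file.
# "word" and the file names below are placeholders; the real NINJAL tag set
# may differ, so adjust after inspecting the XML.
import xml.etree.ElementTree as ET

tree = ET.parse("meiroku_zasshi.xml")            # hypothetical input file
root = tree.getroot()

# Collect the text content of every word-level element, in document order.
words = [el.text for el in root.iter("word") if el.text]

with open("meiroku_zasshi_spaced.txt", "w", encoding="utf-8") as out:
    out.write(" ".join(words))
```

If the corpus declares an XML namespace, the tag name passed to iter() will need it in Clark notation, e.g. root.iter("{http://example.org/ns}word").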

Exciting stuff, this corpus. Unfortunately, most of NINJAL’s other amazing corpora are available only on CD-ROMs that work on old versions of Windows. Sigh. But I’ll work with what I’ve got.

So that’s your update from the world of Japanese text analysis.

New issue of D-Lib magazine

D-Lib Magazine has just published its most recent issue, available at http://www.dlib.org.

This looks to be a great issue, with a number of fascinating articles on dissertations and theses in institutional repositories, using Wikipedia to increase awareness of digital collections, MOOCs, and automatic ordering of items based on reading lists.

Please check it out! All articles are available in full-text on the site.