Category Archives: project

Writing Process: NaNoWriMo and Me

I’ve been meaning to write about my writing process for quite a while now and am surprised, looking back through my blog archives, that I have not yet addressed it.

This post could alternately be titled “How NaNoWriMo Enabled Me to Write My Dissertation in Three and a Half Months” or “The Importance of NaNoWriMo for Academic Writing.” Or just “Do NaNoWriMo at Least Once, People.”

NaNoWriMo stands for “National Novel Writing Month” and has been going since the turn of the twenty-first century. I’ve done it myself most years since 2002. No, I don’t have a published novel, and in fact I only finished two of them in that time. (And the first one didn’t even “win,” since the only criterion for winning is having a file containing 50,000 words, and it came in at about 40,000 when it was done. Oh well. My best and first finished work, so I’m cool with it. In fact, I’m still working on revising that work and trying to cut a version of it into a 10,000-word short story.) But man, what I got out of it.

NaNoWriMo taught me how to write. I don’t mean how to write well, or grammar or mechanics or plot or anything like that. It taught me how to put words on the page. And, after all, that is the first step to writing something. You have to just start making words.

Taiyō project: first steps with data

As I begin working on my project involving Taiyō magazine, I thought I’d document what I’m doing so others can see the process of cleaning the data I’ve gotten, and then experimenting with it. This is the first part in that series: first steps with data, cleaning it, and getting it ready for analysis. If I have the Taiyō data in “plain text,” what’s there to clean? Oh, you have no idea.


website to jekyll

While my research diary has stalled out because I haven’t been researching (other than some administrative tasks like collecting and organizing article PDFs, and typing notes into Mendeley), I have made some progress on updating my website.

Specifically, I have switched over to using Jekyll, software that converts Markdown/HTML and Sass/CSS into static web pages. Why did I want to do this? Because I want a consistent header and footer (navigation and that blurb at the bottom of every page) across the whole site, but I don’t want to manually edit every single file whenever I update one of those or change the site structure/design. I also didn’t want to use PHP, because then all my files would be .php and, on top of that, it feels messier. I like static HTML a lot.

I’m just writing down my notes here for others who might want to use it too. The tutorials I’ve found only talk about how to publish your site to GitHub Pages, but I have my own hosting. I also already had a full static site coded in HTML and CSS, so I didn’t want to start all over again with Markdown. (Markdown is just a different markup language from HTML; from what I can tell, you can’t get nearly the flexibility or semantic markup into Markdown documents that you can with HTML, so I’m sticking with the latter.) I wondered: all these tutorials show you how to do it from scratch, but would it be difficult to convert an existing HTML/CSS site into a Jekyll-powered site?

The answer is: no. It’s really really easy. Just copy and paste from your old site into some broken-up files in the Jekyll directory, serve, and go.

I recommend following the beginning of this tutorial by Tania Rascia. This will help you get Jekyll installed and set up.

Then, if you want a website — not a blog — what you want to do is just start making “index.html”, “about.html”, folders with more .html files (or .md if you prefer), etc., in your Jekyll folder. These will all be generated as regular .html pages in the _site directory when you start the server, and will be updated as long as the server is running. It’ll all be structured how you set it up in the Jekyll folder. For my site, that means I have folders like “projects” and “guides” in addition to top-level pages (such as “index.html”).
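To make that concrete, here is a rough sketch of how a source folder along these lines might be laid out. Apart from the standard _config.yml, _layouts, _includes, and _site, the names are just examples based on my own setup, not a requirement:

    mysite/
    ├── _config.yml          # site-wide settings
    ├── _layouts/
    │   └── default.html     # shared header/footer wrapped around the {{ content }} tag
    ├── _includes/           # optional partials (navigation, footer blurb) pulled into layouts
    ├── index.html           # top-level pages, each starting with a YAML front matter block
    ├── about.html
    ├── projects/            # subfolders with more .html (or .md) pages
    ├── guides/
    └── _site/               # the generated site; this is what gets uploaded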

Finally, start your server and generate all those static pages. Put your CSS file wherever the link in your head element points to on your web server. (I use its full URL, starting with http://, because my site has nested folders, and a bare relative path like “mollydesjardin.css” wouldn’t resolve from the non-top-level pages.) Then upload all the files from _site to your server and voilà, you have your static website.

I do not “get” Git well enough yet to follow some more complicated instructions I found for automatically pushing my site to my hosting. What I’m doing instead, probably the simplest if slightly cumbersome solution, is manually SFTPing the files to my web server as I modify them. Obviously, I do not have to upload and overwrite every file every time; I just select the ones I created or modified in the _site directory and upload those.

Hope this is helpful for someone starting out with Jekyll, converting an existing HTML/CSS site.

research diary go

binding

Lately, I feel like I’m stuck in short-term thinking. While I hear “be in the moment” is a good thing, I’m overly in the moment. I’m having a hard time thinking long-term and planning out projects, let alone sticking to any kind of plan. Not that I have one.

A review of my dissertation recently went online, and of course some reactions to my sharing it were “what have you published in journals?” and “are you turning it into a book?” I graduated three years ago, and the dissertation itself was finished and handed in six months before that. This summer, I’ll be looking at four years of being “done” without much to show for the intervening time.

Of course, it’s hard to show something when you have a full-time job that doesn’t include research as a professional component. But if I want to do it for myself — and I do — that means that I need to come up with a non-job way to motivate myself and stay on track.

That brings me to the title of this post. My mother recently had a “meeting with herself” at the end of the work week to check in on what she meant to do and what actually happened. It sounds remarkably productive to me as a way to keep yourself 1) kind of on track, and 2) in touch with your own habits and aspirations. It’s easy to lose touch with those things in the weekly grind.

I decided I will have a weekend meeting with myself every week and, as a part of that, write a narrative of what I did. I’ll write it before I review my list of aspirations for the previous week; then, when I compare the two, I won’t necessarily beat myself up over “not meeting goals” but will use the comparison as an opportunity to refine my aspirations based on how I actually work (or don’t). As a part of that, to hold myself accountable and also to start a dialogue with others, I’ll be posting a cleaned-up version of that research diary here once a week. Don’t expect detailed notes, but do expect a diary of my process and the kinds of activities I engage in when doing research and writing.

I hope this can be helpful to a beginning researcher and spark some conversation with more experienced ones. While this is a personal journey of a sort, it is public, and I welcome your comments.

thinking about ‘sentiment analysis’

I just got off the phone this morning with a researcher who is interested in doing sentiment analysis on a corpus of fiction, specifically by having some native speakers of Japanese (I think) tag adjectives as positive or negative, and then looking at the overall shape of the corpus with those tags in mind.

A while back, I wrote a paper about geoparsing and sentiment analysis for a class, describing a project I worked on. Talking to this researcher made me think back to this project – which I’m actually currently trying to rewrite in Python and then make work on some Japanese, rather than Victorian English, texts – and my own definition of sentiment analysis for humanistic inquiry.*

How is my definition of sentiment analysis different? Let me start with the methodology. What I did was look for salient adjectives: I first identified the most “salient” nouns (not necessarily the most frequent; I still need to refine my heuristics), and then took the adjectives that appeared next to them. I also used WordNet to find words related to these adjectives and nouns, expanding my search beyond those specific words to ones with similar meanings that I might have missed (in particular, I looked at hypernyms (broader terms) and synonyms of the nouns, and synonyms of the adjectives).
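As a very rough illustration of that expansion step only (not my actual code, which I’m still rewriting), here is what the WordNet lookup might look like in Python with NLTK, starting from hypothetical seed lists of nouns and adjectives already pulled out of a text:

    # Sketch: expand seed nouns/adjectives with WordNet synonyms and hypernyms.
    # Requires the WordNet data: nltk.download('wordnet')
    from nltk.corpus import wordnet as wn

    def expand_terms(nouns, adjectives):
        expanded = set(nouns) | set(adjectives)
        for noun in nouns:
            for synset in wn.synsets(noun, pos=wn.NOUN):
                # synonyms of the noun
                expanded.update(lemma.name() for lemma in synset.lemmas())
                # hypernyms (broader terms) of the noun
                for hyper in synset.hypernyms():
                    expanded.update(lemma.name() for lemma in hyper.lemmas())
        for adj in adjectives:
            for synset in wn.synsets(adj):
                # synonyms of the adjective, across its senses
                expanded.update(lemma.name() for lemma in synset.lemmas())
        return expanded

    # Hypothetical seed terms of the kind pulled from a travel narrative
    print(sorted(expand_terms(['train', 'landscape'], ['dusty', 'green'])))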

My method of sentiment analysis ends up looking more like automatic summarization than the positive-negative sentiment analysis we more frequently encounter, even in humanistic work such as Matt Jockers’s recent research. I argue, of course, that my method is somewhat more meaningful. I consider all adjectives to be sentiment words, because they carry subjective judgment (even something that’s kind of green might be described by someone else as also kind of blue). And I’m more interested in the character of subjective judgment than in whether it can be labeled ‘objectively’ as positive or negative (something I don’t think is really possible in humanistic inquiry, or even in business applications). In other words, if we have to pick out the most representative feelings people express about what they’re experiencing, what are they actually feeling about that experience?

After all, can you really say that weather is good or bad, that there being a lot of farm fields is good or bad? I looked at 19th-century British women’s travel narratives of “exotic” places, and I found that their sentiment was often just observations about trains and the landscape and the people. They didn’t talk about whether they were feeling positively or negatively about those things; rather, they gave us their subjective judgment of what those things were like.

My take on sentiment analysis, then, is clearly that we need to introduce human judgment to the end of the process, perhaps gathering these representative phrases and adjectives (I lean toward phrases or even whole sentences) and then deciding what we can about them. I don’t even think a human interlocutor could put down a verdict of positive or negative on these observations and judgments – sentiments – that the women had about their experiences and environments. If not even a human could do it, and humans write and train the algorithms, how can the computer do it?

Is there even a point? Does it matter if it’s possible or not? We should be looking for something else entirely.

(I really need to get cracking on this project. Stay tuned for the revised methodology and heuristics, because I hope to write more and share code here as I go along.)

* I’m also trying to write a more extensive and revised paper on this, meant for the new incarnation of LLC.

WORD LAB: a room with a whiteboard

Several years ago, I attended Digital Humanities 2011 at Stanford and had the opportunity to meet with Franco Moretti. When Franco asked what I was interested in, I admitted that I badly wanted to see the Literary Lab I’d heard so much about, and seen so much interesting research come out of. He laughed and said he’d show it to me, but that I shouldn’t get too excited.

Why? Because Literary Lab is a windowless conference room in the middle of the English department at Stanford. Literary Lab is a room with a whiteboard.

I couldn’t have been more excited, to Franco’s amusement.

A room with a whiteboard. A room dedicated to talking about projects, to collaborating, to bringing a laptop and getting research done, and to sharing and brainstorming via drawing and notes up on a wall, not on a piece of paper or a shared document. It was an important moment for me.

When I was in graduate school, I’d tossed around a number of projects with colleagues, and gotten excited about a lot of them. But they always petered out, lost momentum, and disappeared. This is surely due to busy schedules and competing projects – not least the dissertation – but I think it’s also partly due to logistics.

Much as our work has gone online, and despite these being digital projects – just like Literary Lab’s research – a physical space is still hugely important. A space to talk, a space to brainstorm and draw and write, a space to work together: a space to keep things going.

I had been turning this over in my head ever since I met with Franco, but never had the opportunity to put my idea into action. Then I came to Penn, and met a like-minded colleague who got just as excited about the idea of dedicated space and collective work on projects as I was.

Our boss thought the idea of a room with a whiteboard was funny, just as Franco had thought my low standards were kind of silly. But you know what? You don’t need a budget to create ideas and momentum. You don’t need a budget to stimulate discussion and cross-disciplinary cooperation. You just need space and time, and willing participants who can make use of it. We made a proposal, got the go-ahead, and took advantage of a new room in our Kislak Center at Penn that was free for an hour and a half a week. It was enough: the Vitale II lab is a room with a whiteboard. It even has giant TVs to hook up a laptop.

Thus, WORD LAB was born: a text-analysis interest group that just needed space to meet, and people to populate it. We recruited hard, mailing every department and discipline list we could think of, and got a mind-boggling 15+ people at the first meeting, plus the organizers and some interested library staff, from across the university. The room was full.

That was the beginning of September 2014. WORD LAB is still going strong, with more formal presentations every other week, interspersed with journal club, coding tutorials, and the like in OPEN LAB on the other weeks. We get a regular attendance of at least 7-10 people a week, and the faces keep changing. It’s a group of Asianists, an Islamic law scholar, Annenberg School for Communication researchers, political scientists, psychologists, and librarians, some belonging to more than one group. We’ve had presentations from Penn staff and researchers at other regional universities, and we have upcoming Skype presentations from Chicago and Northeastern.

A room with a whiteboard has turned into a budding cross-disciplinary, cross-professional text analysis interest community at Penn.

Keep up on WORD LAB:
@upennwordlab on Twitter
WORD LAB on Facebook

academic death squad

Are you interested in joining a supportive academic community online? A place to share ideas, brainstorming, motivation and inspiration, and if you’re comfortable, your drafts and freewriting and blogging for critique? If so, Academic Death Squad may be for you.

This is a Google group that I believe can be accessed publicly, although I’ve had some issues with sign-ups from non-Gmail addresses, and you appear to have to be logged in to Google to view the group’s page. Just put in a request to join and I’ll approve you. Or, if that doesn’t work, email me at mdesjardin (at) gmail.com.

Link: [Academic Death Squad]

I’m trying to get as many disciplines and geographic/chronological areas involved as possible, so all are welcome. And I especially would love to have diversity in careers, mixing in tenure-track faculty, adjuncts, grad students, staff broadly interpreted, librarians, museum curators, and independent scholars – and any other career path you can think of. Many of us not in grad student or faculty land have very little institutional support for academic research, so let’s support each other virtually.

In fact, one member has already posted a publication-ready article draft for last-minute comments, so we even have a little activity already!

Best regards and best wishes for this group. Please email me or comment on this post if you have questions, concerns, or suggestions.

よろしくお願いいたします! (I look forward to working with you!)

*footnote: The name was originally based on a group I ran called “Creative Death Squad,” but the real origin is an amazing t-shirt I used to own in Pittsburgh that read “412 Vegan Death Squad” and had a picture of a skull with a carrot driven through it. I hope the name connotes badass-ness, serious commitment to our research, and some casual levity. Take it as you will.

Pre-processing Japanese literature for text analysis

I recently wrote a small script to perform a couple of functions for pre-processing Aozora Bunko texts (text files of public domain, modern Japanese literature and non-fiction) to be used with Western-oriented text analysis tools, such as Voyant, other TAPoR tools, and MALLET. Whereas Japanese text analysis software focuses largely on linguistics (tagging parts of speech, lemmatizing, etc.), Western tools open up possibilities for visualization, concordances, topic modeling, and other various modes of analysis.

Why do these Aozora texts need to be processed? Well, a couple of issues.

  1. They contain ruby, which is basically a gloss over a Chinese character giving its pronunciation. These glosses can be straightforward pronunciation help, or actually different words that add meaning and context. While I have my issues with removing ruby, it’s impossible to do straightforward tool-based analysis without removing it, and many people who want to do this kind of analysis want it removed.
  2. The Aozora files are not exactly plain text: they’re HTML. The HTML tags and Aozora metadata (telling where the text came from, for example) need to be removed before analysis can be performed.
  3. There are no spaces between words in Japanese, but Western text analysis tools identify words by looking at where there are spaces. Without inserting spaces, it looks like each line is one big word. So I needed to insert spaces between the Japanese words.

How did I do it? My approach, because of my background and expertise, was to write a Python script that uses a couple of helpful libraries, including BeautifulSoup for ruby removal based on HTML tags, and TinySegmenter for inserting spaces between words. My script requires you to have these packages installed, but it’s not a big deal to do so. You then run the script at a command line prompt. It looks for all .html files in a directory, loads them and runs the pre-processing, then outputs each processed file under the same filename with a .txt extension, as a plain text UTF-8 encoded file.
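The real script lives on the research guide linked at the end of this post; purely as a sketch of the file handling just described, the outer loop looks something like this, with the cleaning and segmenting steps stubbed out and sketched in the next snippets:

    # Sketch only: process every .html file in the current directory and write
    # a UTF-8 plain text file with the same base name and a .txt extension.
    import glob
    import os

    def preprocess(html):
        # Placeholder: the ruby/tag removal and word-segmentation steps are
        # sketched in the snippets below.
        return html

    for path in glob.glob('*.html'):
        # Aozora HTML files are typically Shift_JIS encoded; adjust if yours differ.
        with open(path, encoding='shift_jis', errors='ignore') as f:
            html = f.read()
        out_path = os.path.splitext(path)[0] + '.txt'
        with open(out_path, 'w', encoding='utf-8') as f:
            f.write(preprocess(html))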

The first step in the script is to remove the ruby. Helpfully, the ruby is contained in several HTML tags. I had BeautifulSoup traverse the file and remove all elements contained within these tags; it removes both the tags and content.

Next, I used a very simple regular expression to remove everything in angle brackets, i.e. the HTML tags. This is kind of quick and dirty, and won’t work on every file in the universe, but in Aozora texts everything inside angle brackets is an HTML tag, so it’s not a problem here.
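A minimal sketch of these two cleaning steps together, assuming (as in the files I worked with) that the glosses live in rt and rp elements; decompose() removes both the tags and their contents, leaving the base text in place:

    # Sketch: strip ruby glosses with BeautifulSoup, then strip all remaining tags.
    import re
    from bs4 import BeautifulSoup

    def strip_ruby_and_tags(html):
        soup = BeautifulSoup(html, 'html.parser')
        for gloss in soup.find_all(['rt', 'rp']):
            gloss.decompose()  # removes the tag and the gloss text inside it
        # Quick and dirty: in Aozora files, anything in angle brackets is an HTML tag.
        return re.sub(r'<[^>]*>', '', str(soup))

    sample = '<ruby><rb>東京</rb><rp>（</rp><rt>とうきょう</rt><rp>）</rp></ruby>へ行く。'
    print(strip_ruby_and_tags(sample))  # 東京へ行く。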

Finally, I used TinySegmenter on the resulting HTML-free text to split the text into words. Luckily for me, it returns an array of words – basically, each word is a separate element in a list like [‘word1’, ‘word2’, … ‘wordn’] for n words. This makes my life easy for two reasons. First, I simply joined the array with a space between each word, creating one long string (the outputted text) with spaces between each element in the array (words). Second, it made it easy to just remove the part of the array that contains Aozora metadata before creating that string. Again, this is quick and dirty, but from examining the files I noted that the metadata always comes at the end of the file and begins with the word 底本 (‘source text’). Remove that word and everything after it, and then you have a metadata-free file.
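A sketch of this last step, assuming the Python port of TinySegmenter, which exposes a tokenize() method that returns a list of words:

    # Sketch: segment into words, drop the trailing Aozora metadata, join with spaces.
    import tinysegmenter

    segmenter = tinysegmenter.TinySegmenter()

    def segment_and_trim(text):
        words = segmenter.tokenize(text)
        # The bibliographic metadata comes at the end and begins with 底本 ('source text');
        # drop that word and everything after it.
        if '底本' in words:
            words = words[:words.index('底本')]
        return ' '.join(words)

    print(segment_and_trim('吾輩は猫である。名前はまだ無い。底本：「吾輩は猫である」新潮文庫'))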

Write this resulting text into a plain text file, and you have a non-ruby, non-HTML, metadata-free, whitespace-delimited Aozora text! Although you have to still download all the Aozora files individually and then do what you will with the resulting individual text files, it’s an easy way to pre-process this text and get it ready for tool-based (and also your-own-program-based) text analysis.

I plan to put the script on GitHub for your perusal and use (and of course modification) but for now, check it out on my Japanese Text Analysis research guide at Penn.

#dayofDH Meiroku zasshi 明六雑誌 project

It’s come to my attention that Fukuzawa Yukichi’s (and others’) early Meiji (1868-1912) journal, Meiroku zasshi 明六雑誌, is available online not just as PDF (which I knew about) but also as a fully tagged XML corpus from NINJAL (and oh my god, it has lemmas). All right!


I recently met up with Mark Ravina at the Association for Asian Studies conference, who brought this to my attention, and we are doing a lot of brainstorming about what we can do with this as a proof-of-concept project before moving on to other early Meiji documents. We have big ideas, like training OCR to recognize the difference between the katakana ニ and the kanji 二, for example; Meiji documents generally break OCR for reasons like this, because they’re so different from contemporary Japanese. It’s like asking Acrobat to handle a medieval manuscript, in some ways.

But to start, we want to run the contents of Meiroku zasshi through tools like MALLET and Voyant, just to see how they do with non-Western languages (I don’t expect any problems, but we’ll see) and what we get out of it. I’d also be interested in going back to the Stanford CoreNLP API and seeing what kind of linguistic analysis we can do there. (First, I have to think of a methodology. :O)

In order to do this, we need whitespace-delimited text, with words separated by spaces. I’ve written about this elsewhere, but to sum up: Japanese is not written with spaces between words, so tools intended for Western languages think it’s all one big word. There are no easy ways I can find to do this splitting; I’m working on an application that both strips ruby from Aozora bunko texts AND splits words with a space, but it’s coming along slowly. How do we get this for Meiroku zasshi in a quick and dirty way that lets us just play with the data?

So today after work, I’m going to use Python’s ElementTree library (xml.etree.ElementTree) to take the contents of the word tags from the corpus and just spit them into a text file delimited by spaces. Quick and dirty! I’ve been meaning to do this for weeks, but since it’s a “day of DH,” I thought I’d use the opportunity to motivate myself. Then, we can play.
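Something along these lines should do it. I don’t have the corpus’s schema in front of me, so the element name 'w' and the meiroku/*.xml path below are stand-ins; substitute whatever NINJAL actually calls its word tags (and handle a namespace, if there is one):

    # Sketch: dump the text of every word element, space-delimited, into one file.
    import glob
    import xml.etree.ElementTree as ET

    words = []
    for path in sorted(glob.glob('meiroku/*.xml')):
        root = ET.parse(path).getroot()
        # 'w' is a hypothetical element name; use the corpus's real word tag.
        for w in root.iter('w'):
            if w.text:
                words.append(w.text)

    with open('meiroku_whitespace.txt', 'w', encoding='utf-8') as f:
        f.write(' '.join(words))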

Exciting stuff, this corpus. Unfortunately most of NINJAL’s other amazing corpora are available only on CD-ROMs that work on old versions of Windows. Sigh. But I’ll work with what I’ve got.

So that’s your update from the world of Japanese text analysis.