Category Archives: programming

Taiyō project: first steps with data

As I begin working on my project involving Taiyō magazine, I thought I’d document what I’m doing so others can see the process of cleaning the data I’ve gotten, and then experimenting with it. This is the first part in that series: first steps with data, cleaning it, and getting it ready for analysis. If I have the Taiyō data in “plain text,” what’s there to clean? Oh, you have no idea.


thinking about ‘sentiment analysis’

I just got off the phone with a researcher this morning who is interested in looking at sentiment analysis on a corpus of fiction, specifically by having some native speakers of Japanese (I think) tag adjectives as positive or negative, then look at the overall shape of the corpus with those tags in mind.

A while back, I wrote a paper about geoparsing and sentiment analysis for a class, describing a project I worked on. Talking to this researcher made me think back to this project – which I’m actually currently trying to rewrite in Python and then make work on some Japanese, rather than Victorian English, texts – and my own definition of sentiment analysis for humanistic inquiry.*

How is my definition of sentiment analysis different? How about I start with the methodology? What I did was look for salient adjectives, which I found by identifying the most “salient” nouns (not necessarily the most frequent ones; I still need to refine my heuristics) and then the adjectives that appeared next to them. I also used WordNet to find words related to these adjectives and nouns, expanding my search beyond those specific words to ones with similar meanings that I might have missed: in particular, hypernyms (broader terms) and synonyms of the nouns, and synonyms of the adjectives.
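In Python, that WordNet expansion step looks roughly like the sketch below. This is a minimal illustration of the idea rather than my actual code, and it assumes NLTK with its WordNet corpus downloaded; the salient-noun scoring itself is left out, since those heuristics are still in flux.

    # Minimal sketch of the WordNet expansion step, assuming NLTK and its
    # WordNet corpus (nltk.download('wordnet')) are installed. The salient-noun
    # scoring is omitted; this only shows how the related-word lookup works.
    from nltk.corpus import wordnet as wn

    def expand_noun(noun):
        """Collect synonyms and hypernyms (broader terms) for a noun."""
        related = set()
        for synset in wn.synsets(noun, pos=wn.NOUN):
            related.update(lemma.name() for lemma in synset.lemmas())
            for hypernym in synset.hypernyms():
                related.update(lemma.name() for lemma in hypernym.lemmas())
        return related

    def expand_adjective(adjective):
        """Collect synonyms for an adjective."""
        related = set()
        for synset in wn.synsets(adjective, pos=wn.ADJ):
            related.update(lemma.name() for lemma in synset.lemmas())
        return related

    # Example: expand the search terms around a noun-adjective pair.
    print(expand_noun('train'))
    print(expand_adjective('gloomy'))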

My method of sentiment analysis ends up looking more like automatic summarization than the positive-negative sentiment analysis we more frequently encounter, even in humanistic work such as Matt Jockers’s recent research. I argue, of course, that my method is somewhat more meaningful. I consider all adjectives to be sentiment words, because they carry subjective judgment (even something that’s kind of green might be described by someone else as also kind of blue). And I’m more interested in the character of that subjective judgment than in whether it can be ‘objectively’ classified as positive or negative (something I don’t think is really possible in humanistic inquiry, or even in business applications). In other words, if we have to pick out the most representative feelings people have about what they’re experiencing, what are they feeling about that experience?

After all, can you really say that weather is good or bad, that there being a lot of farm fields is good or bad? I looked at 19th-century British women’s travel narratives of “exotic” places, and I found that their sentiment was often just observations about trains and the landscape and the people. They didn’t talk about whether they were feeling positively or negatively about those things; rather, they gave us their subjective judgment of what those things were like.

My take on sentiment analysis, then, is that we need to introduce human judgment at the end of the process, perhaps gathering these representative phrases and adjectives (I lean toward phrases or even whole sentences) and then deciding what we can say about them. I don’t think even a human interlocutor could put down a verdict of positive or negative on these observations and judgments – sentiments – that the women had about their experiences and environments. And if not even a human could do it, and humans write and train the algorithms, how can the computer?

Is there even a point? Does it matter if it’s possible or not? We should be looking for something else entirely.

(I really need to get cracking on this project. Stay tuned for the revised methodology and heuristics, because I hope to write more and share code here as I go along.)

* I’m also trying to write a more extensive and revised paper on this, meant for the new incarnation of LLC.

WORD LAB: a room with a whiteboard

Several years ago, I attended Digital Humanities 2011 at Stanford and had the opportunity to meet with Franco Moretti. When Franco asked what I was interested in, I admitted that I badly wanted to see the Literary Lab I’d heard so much about, and seen so much interesting research come out of. He laughed and said he’d show it to me, but that I shouldn’t get too excited.

Why? Because Literary Lab is a windowless conference room in the middle of the English department at Stanford. Literary Lab is a room with a whiteboard.

I couldn’t have been more excited, to Franco’s amusement.

A room with a whiteboard. A room dedicated to talking about projects, to collaborating, to bringing a laptop and getting research done, and to sharing and brainstorming via drawing and notes up on a wall, not on a piece of paper or a shared document. It was an important moment for me.

When I was in graduate school, I’d tossed around a number of projects with colleagues, and gotten excited about a lot of them. But they always petered out, lost momentum, and disappeared. This is surely due to busy schedules and competing projects – not least the dissertation – but I think it’s also partly due to logistics.

Much as our work has gone online, and despite these being digital projects – just like Literary Lab’s research – a physical space is still hugely important. A space to talk, a space to brainstorm and draw and write, a space to work together: a space to keep things going.

I had been turning this over in my head ever since I met with Franco, but never had the opportunity to put my idea into action. Then I came to Penn, and met a like-minded colleague who got just as excited about the idea of dedicated space and collective work on projects as I was.

Our boss thought the idea of a room with a whiteboard was funny, just as Franco had thought my low standards were kind of silly. But you know what? You don’t need a budget to create ideas and momentum. You don’t need a budget to stimulate discussion and cross-disciplinary cooperation. You just need space and time, and willing participants who can make use of them. We made a proposal, got the go-ahead, and took advantage of a new room in our Kislak Center at Penn that was free for an hour and a half a week. It was enough: the Vitale II lab is a room with a whiteboard. It even has giant TVs to hook a laptop up to.

Thus, WORD LAB was born: a text-analysis interest group that just needed space to meet, and people to populate it. We recruited hard, mailing every department and discipline list we could think of, and got a mind-boggling 15+ people at the first meeting, plus the organizers and some interested library staff, from across the university. The room was full.

That was the beginning of September 2014. WORD LAB is still going strong, with more formal presentations every other week, interspersed with journal club, coding tutorials, and the like in OPEN LAB on the other weeks. We get a regular attendance of at least 7-10 people a week, and the faces keep changing. It’s a group of Asianists, an Islamic law scholar, Annenberg School of Communication researchers, political scientists, psychologists, and librarians, some belonging to more than one category. We’ve had presentations from Penn staff and researchers at other regional universities, with upcoming Skype presentations from Chicago and Northeastern.

A room with a whiteboard has turned into a budding cross-disciplinary, cross-professional text analysis interest community at Penn.

Keep up on WORD LAB:
@upennwordlab on Twitter
WORD LAB on Facebook

Pre-processing Japanese literature for text analysis

I recently wrote a small script to perform a couple of functions for pre-processing Aozora Bunko texts (text files of public domain, modern Japanese literature and non-fiction) to be used with Western-oriented text analysis tools, such as Voyant, other TAPoR tools, and MALLET. Whereas Japanese text analysis software focuses largely on linguistics (tagging parts of speech, lemmatizing, etc.), Western tools open up possibilities for visualization, concordances, topic modeling, and various other modes of analysis.

Why do these Aozora texts need to be processed? Well, a couple of issues.

  1. They contain ruby, which is basically glosses of Chinese characters that give their pronunciation. These can be straightforward pronunciation help, or entirely different words that add meaning and context. While I have my issues with removing ruby, straightforward tool-based analysis is impossible without removing it, and many people who want to do this kind of analysis want it gone.
  2. The Aozora files are not exactly plain text: they’re HTML. The HTML tags and Aozora metadata (telling where the text came from, for example) need to be removed before analysis can be performed.
  3. There are no spaces between words in Japanese, but Western text analysis tools identify words by looking at where there are spaces. Without inserting spaces, it looks like each line is one big word. So I needed to insert spaces between the Japanese words.

How did I do it? My approach, given my background and expertise, was to write a Python script using a couple of helpful libraries: BeautifulSoup for ruby removal based on HTML tags, and TinySegmenter for inserting spaces between words. The script requires you to have these packages installed, but that’s not a big deal to do. You then run the script at a command line prompt. It looks for all .html files in a directory, loads each one and runs the pre-processing, then writes each processed file out under the same filename with a .txt extension, as a plain-text UTF-8 encoded file.

The first step in the script is to remove the ruby. Helpfully, the ruby is contained in several HTML tags. I had BeautifulSoup traverse the file and remove all elements contained within these tags; it removes both the tags and content.

Next, I used a very simple regular expression to remove everything in brackets – i.e. the HTML tags. This is kind of quick and dirty, and won’t work on every file in the universe, but in Aozora texts everything inside a bracket is an HTML tag, so it’s not a problem here.

Finally, I used TinySegmenter on the resulting HTML-free text to split the text into words. Luckily for me, it returns an array of words – basically, each word is a separate element in a list like [‘word1’, ‘word2’, … ‘wordn’] for n words. This makes my life easy for two reasons. First, I simply joined the array with a space between each word, creating one long string (the outputted text) with spaces between each element in the array (words). Second, it made it easy to just remove the part of the array that contains Aozora metadata before creating that string. Again, this is quick and dirty, but from examining the files I noted that the metadata always comes at the end of the file and begins with the word 底本 (‘source text’). Remove that word and everything after it, and then you have a metadata-free file.

Write the resulting text into a plain text file, and you have a ruby-free, HTML-free, metadata-free, whitespace-delimited Aozora text! Although you still have to download all the Aozora files individually and then do what you will with the resulting text files, it’s an easy way to pre-process this text and get it ready for tool-based (and your-own-program-based) text analysis.
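The core of the approach looks something like the sketch below. Treat it as an outline under a few assumptions rather than the script itself: it assumes the bs4 (BeautifulSoup) and tinysegmenter packages are installed, that the files are Shift-JIS encoded HTML as Aozora serves them, and that the ruby glosses live in rt and rp elements; the details in my script differ a bit.

    # Outline of the pre-processing pipeline: strip ruby, strip HTML tags,
    # cut the trailing Aozora metadata, and insert spaces between words.
    import glob
    import re

    from bs4 import BeautifulSoup
    import tinysegmenter

    segmenter = tinysegmenter.TinySegmenter()

    for path in glob.glob('*.html'):
        with open(path, encoding='shift_jis') as f:  # Aozora HTML is Shift-JIS
            soup = BeautifulSoup(f.read(), 'html.parser')

        # Remove the ruby glosses: deleting the rt and rp elements (tags and
        # contents) leaves only the base text they annotate.
        for tag in soup.find_all(['rt', 'rp']):
            tag.decompose()

        # Quick and dirty: drop everything inside angle brackets, i.e. the tags.
        text = re.sub(r'<[^>]*>', '', str(soup))

        # The metadata always comes at the end and begins with 底本.
        text = text.split('底本')[0]

        # Insert spaces between words and write out a UTF-8 plain text file.
        words = segmenter.tokenize(text)
        with open(path.rsplit('.', 1)[0] + '.txt', 'w', encoding='utf-8') as out:
            out.write(' '.join(words))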

I plan to put the script on GitHub for your perusal and use (and of course modification) but for now, check it out on my Japanese Text Analysis research guide at Penn.

#dayofDH Meiroku zasshi 明六雑誌 project

It’s come to my attention that Fukuzawa Yukichi’s (and others’) early Meiji (1868-1912) journal, Meiroku zasshi 明六雑誌, is available online not just as PDF (which I knew about) but also as a fully tagged XML corpus from NINJAL (and oh my god, it has lemmas). All right!


I recently met up with Mark Ravina, who brought this to my attention, at the Association for Asian Studies conference, and we are doing a lot of brainstorming about what we can do with this as a proof-of-concept project before moving on to other early Meiji documents. We have big ideas, like training OCR to recognize the difference between the katakana ニ and the kanji 二, for example; Meiji documents generally break OCR for reasons like this, because they’re so different from contemporary Japanese. It’s like asking Acrobat to handle a medieval manuscript, in some ways.

But to start, we want to run the contents of Meiroku zasshi through tools like MALLET and Voyant, just to see how they do with non-Western languages (we don’t expect any problems, but we’ll see) and what we get out of it. I’d also be interested in going back to the Stanford CoreNLP API and seeing what kind of linguistic analysis we can do there. (First, I have to think of a methodology. :O)

In order to do this, we need whitespace-delimited text, with words separated by spaces. I’ve written about this elsewhere, but to sum up: Japanese is not written with spaces between words, so tools intended for Western languages think it’s all one big word. I can’t find any easy way to do this splitting; I’m working on an application that both strips ruby from Aozora bunko texts AND splits words with spaces, but it’s coming along slowly. How to get this for Meiroku zasshi in a quick and dirty way that lets us just play with the data?

So today after work, I’m going to use Python’s ElementTree library for XML to take the contents of the word tags from the corpus and just spit them into a text file delimited by spaces. Quick and dirty! I’ve been meaning to do this for weeks, but since it’s a “day of DH,” I thought I’d use the opportunity to motivate myself. Then we can play.
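For the record, the quick-and-dirty version I have in mind looks something like this; the file path and the element name 'suw' are placeholders, since I haven't checked the actual tag names in the NINJAL markup yet.

    # Pull the word tokens out of the corpus XML and write them to a single
    # text file, delimited by spaces. 'meiroku/*.xml' and the element name
    # 'suw' are placeholders -- the real NINJAL tag names may differ.
    import glob
    import xml.etree.ElementTree as ET

    words = []
    for path in sorted(glob.glob('meiroku/*.xml')):
        tree = ET.parse(path)
        for token in tree.getroot().iter('suw'):  # placeholder tag name
            if token.text:
                words.append(token.text.strip())

    with open('meiroku_words.txt', 'w', encoding='utf-8') as out:
        out.write(' '.join(words))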

Exciting stuff, this corpus. Unfortunately most of NINJAL’s other amazing corpora are available only on CD-ROMs that work on old versions of Windows. Sigh. But I’ll work with what I’ve got.

So that’s your update from the world of Japanese text analysis.

Japanese tokenization – tools and trials

I’ve been looking (okay, not looking, wishing) for a Japanese tokenizer for a while now, and today I decided to sit down and do some research into what’s out there. It didn’t take long – things have improved recently.

I found two tools quickly: the kuromoji Japanese morphological analyzer and the U-Tokenizer CJK Tokenizer API.

First off – what is tokenization? Basically, it’s separating sentences into words, or documents into sentences, or any text into some unit, so that you can chunk the text into parts and analyze them (or do other things with them). When you tokenize a document by word, like a web page, you enable searching: this is how Google finds individual words in documents. You can also extract keywords from a document this way, by writing an algorithm to choose the most meaningful nouns, for example. It’s also the first step in more involved linguistic analysis like part-of-speech tagging (think marking individual words as nouns, verbs, and so on) and lemmatizing (paring words down to their stems, such as removing plural markers and un-conjugating verbs).

This gives you a taste of why tokenization is so fundamental and important for text analysis. It’s what lets you break up an otherwise unintelligible (to the computer) string of characters into units that the computer can attempt to analyze. It can index them, search them, categorize them, group them, visualize them, and so on. Without this, you’re stuck with “words” that are entire sentences or documents, that the computer thinks are individual units based on the fact that they’re one long string of characters.

Usually, the way you tokenize is to break up “words” based on spaces (or sentences based on punctuation rules, etc., although that doesn’t always work). (I put “words” in quotes because you can really use any kind of unit you want; the computer doesn’t understand what words are, and in the end it doesn’t matter. I’m using “words” as an example here.) However, for languages like Japanese and Chinese (and to a lesser extent Korean) that don’t use spaces to delimit all words (in Korean, for example, particles are attached to nouns with no space in between, like saying “athome” instead of “at home”), you run into problems quickly. How do you break up a text into words when there’s no easy way to tell where one ends and the next begins?
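To make the problem concrete, here is what naive space-splitting does in Python with an English sentence versus a Japanese one (the sentences are just examples):

    # Splitting on whitespace works for English, but a Japanese sentence comes
    # back as a single "word" because there are no spaces to split on.
    english = 'I am at home today'
    japanese = '今日は家にいます'

    print(english.split())   # ['I', 'am', 'at', 'home', 'today']
    print(japanese.split())  # ['今日は家にいます'] -- one big token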

The question of tokenizing Japanese may be a linguistic debate. I don’t know enough about linguistics to begin to participate in it, if it is. But I’ll quickly say that you can break up Japanese based on linguistic rules and dictionary rules – understanding which character compounds are nouns, which verb conjugations go with which verb stems (as opposed to being particles in between words), then breaking up common particles into their own units. This appears to be how these tools are doing it. For my own purposes, I’m not as interested in linguistic patterns as I am in noun and verb usage (the meaning rather than the kind) so linguistic nitpicking won’t be my area anyway.

Moving on to the tools. I put them through the wringer: the first two lines of Higuchi Ichiyō’s Ame no yoru, from Aozora bunko.

One, kuromoji, is the tokenizer behind Solr and Lucene. It does a fairly good job, although with Ichiyō’s uncommon word usage and conjugation, it faltered and couldn’t figure out that 高やか is one word; rather it divided it into 高 や か.  It gives the base form, reading, and pronunciation, but nothing else. However, in the version that ships with Solr/Lucene, it lemmatizes. Would that ever make me happy. (That’s, again, reducing a word to its base form, making it easy to count all instances of both “people” and “person” for example, if you’re just after meaning.) I would kill for this feature to be integrated with the below tool.

The other, U-Tokenizer, did significantly better, but its major drawback is that it’s done in the form of an HTTP request, meaning that you can’t put in entire documents (well, maybe you could? how much can you pass in an HTTP request?). If it were downloadable code with an API, I would be very happy (kuromoji is downloadable and has a command line interface). U-Tokenizer figured out that 高やか is one word, and also provides a list of “keywords,” which as far as I can tell is a bunch of salient nouns. I used it for a very short piece of text, so I can’t comment on how many keywords it would come up with for an entire document. The documentation on this is sparse, and it’s not open source, so it’s impossible to know what it’s doing. Still, it’s a fantastic tool, and also seems to work decently for Chinese and Korean.

Each of these tools has its strengths, and both are quite usable for modern and contemporary Japanese. (I really was cruel to feed them Ichiyō.) However, there is a major trial involved in using them with freely-available corpora like Aozora bunko. Guess what? Preprocessing ruby.

Aozora texts contain ruby marked up within the documents. I have my issues with stripping ruby out of documents that use it heavily (Meiji writers, for example), because it adds so much meaning to the text, but let’s say for argument’s sake that we’re not interested in the ruby. Now it’s time to cut it all out. If I were a regular expressions wizard (or even had basic competency with them) I could probably strip this out easily, but it’s still time consuming. Download the text, strip out ruby and other metadata, save as plain text. (Aozora texts are XHTML, NOT “plain text” as they’re often touted to be.) Repeat. For topic modeling with a tool like MALLET, you’re going to want hundreds of documents at the end of it. For example, you might download all Meiji novels from Aozora and divide them into chunks or chapters. Even the complete works of Natsume Sōseki aren’t enough without cutting them down into chapters or even paragraphs to make enough documents to use a topic modeling tool effectively. Then, possibly, run all of these through a part-of-speech tagger like KH Coder. This is going to take a significant amount of time.
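The chunking step, at least, is mechanical once you do have clean, whitespace-delimited plain text. A minimal sketch, in which the 1,000-word chunk size and the filename are arbitrary placeholders:

    # Split a whitespace-delimited plain text file into fixed-size chunks so a
    # topic modeling tool like MALLET has enough "documents" to work with.
    def chunk_file(path, chunk_size=1000):
        with open(path, encoding='utf-8') as f:
            words = f.read().split()
        for i in range(0, len(words), chunk_size):
            out_path = '{}_{:04d}.txt'.format(path.rsplit('.', 1)[0], i // chunk_size)
            with open(out_path, 'w', encoding='utf-8') as out:
                out.write(' '.join(words[i:i + chunk_size]))

    chunk_file('soseki_kokoro.txt')  # placeholder filename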

Then again, preprocessing is an essential and extremely time-consuming part of almost any text analysis project. I went through a moderate amount of work just removing Project Gutenberg metadata from a set of travel narratives and dividing them into chapters; I had downloaded them in plain text, thankfully, not HTML or XML, which made for easy processing. With something that’s not really plain text, with a lot of metadata and a lot of ruby, it’s going to take much more time and effort, which is more typical of a project like this. The digital humanities involve a lot of manual labor, despite the glamorous image and the idea that computers can do that labor for us. Computers are a little finicky about what they’ll accept. (Granted, I’ll be using a script to strip out the XHTML and ruby tags, but it’s going to take work for me to write it in the first place.)

In conclusion? Text analysis, despite exciting available tools, is still hard and time consuming. There is a lot of potential here, but I also see myself going through some trials to get to the fun part, the experimentation. Still, stay tuned, especially for some follow-up posts on these tools and KH Coder as I become more familiar with them. And, I promise to stop being difficult and giving them Ichiyō’s Meiji-style bungo.

don’t learn to code

There is a lot of speculation going on – on the Internet, at conferences, everywhere – about the ways in which we might want to integrate IT skills (for lack of a better word) with humanities education. Undergrads, graduate students, faculty: the idea is that they all need some marketable tech skills at the base of their education in order to participate in the intellectual world and economy of the 21st century.

I hear a lot, “learn to code.” In fact, my alma mater has a required first-semester course for all information science students, from information retrieval specialists to preservationists, to do just that, in Python. Others recommend Ruby. They rightly stay away from the language of my own training, C++, or god forbid, Java. Coding seems to mean scripting, which is fine with me for the purposes of humanities education. We’re not raising software engineers here. We tend to hire those separately.*

I recently read a blog post that advocated for students to “learn a programming language” as part of a language requirement for an English major. (Sorry, the link has been buried in more recent tweets by now.) You’d think I would be all about this. I’m constantly urging that humanities majors acquire enough tech skills to at least know what others are talking about when they might collaborate with them on projects in the future. It also allows one to experiment without the need for hiring a programmer at the outset of a project.

But how much experimentation does it actually allow? What can you actually get done? My contention is: not very much.

If you’re an English major who’s taken CS101 and “learned a programming language,” you have much less knowledge than you think you do. This may sound harsh, but it’s not until the second-semester, first-year CS courses that you even get into data structures and algorithms, the building blocks of programming. Even at that point, you’re just barely starting to get an idea of what you’re doing. There’s a lot more to programming than learning syntax.

In fact, I’d say that learning syntax is not the point. The point is to learn a new way of thinking, the way(s) of thinking that are required for creating programs that do something interesting and productive, that solve real problems. “Learning a programming language,” unless done very well (for example in a book like SICP), is not going to teach you this.

I may sound disdainful or bitter here, but I feel this must be said. It’s frankly insulting as someone who has gone through a CS curriculum to hear “learn a programming language” as if that’s going to allow one to “program” or “code.” Coding isn’t syntax, and it’s not learning how to print to the screen. Those are your tools, but not everything. You need theory and design, the big ideas and patterns that allow you to do real problem-solving, and you’re not going to get that from a one-semester Python course.

I’m not saying there’s no point in trying to learn a programming language if you don’t currently know how to program. But I wish the strategies generally being recommended were more holistic. Learning a programming language is a waste of time if you don’t have the concepts you need it to express.

 

* I’m cursed by an interdisciplinary education, in a way. I have a CS degree but no industry experience. I program both for fun and for work, and I know a range of languages. I’m qualified in that way for many DH programming jobs, but they all require several years of experience that I passed up while busy writing a Japanese literature dissertation. I’ve got a bit too much humanities for some DH jobs, and too little (specifically teaching experience) for others.

programming practice problems

One of the hardest things for me about learning a new programming language is not getting an understanding of the syntax or overarching concepts (like object-oriented programming or recursion), but rather a lack of opportunity for practice. It’s one thing to read a few books about Python, and quite another to look at others’ nontrivial code, or write nontrivial code yourself.

However, I’m often at a loss for ideas when I try to come up with programming projects for myself. Call me uninspired, but I just don’t have many needs for writing programs in my daily life, especially complex ones. And I don’t have any big creative ideas, either. I don’t even have uncreative ideas. So what to do?

It turns out there are some good resources online for practice programming problems. They’re language-agnostic, presenting a problem and asking you for a solution. Unfortunately, I found only a few of them, but I thought I’d share the ones I did find.

The first is the Association for Computing Machinery’s International Collegiate Programming Contest. This provides the contest problems from 1974 to the present! Talk about a treasure trove of programming challenges.

Second, UVa Online Judge. This site contains hundreds of programming problems, some simple and some complex. They have volume upon volume of problem sets. You could spend the rest of your life doing the problems on this site.

Does anyone have additional resources to add?

the tradeoff: elegance vs. performance

Oh snap – I just fixed this by turning on caching in the Cocoon sitemap. Thanks Brian Pytlik Zillig for pointing out that this is where that functionality is useful! And note to self (and all of us): asking questions when you’re torn between solutions can lead to a third solution that does much better than either of the ones you came up with.

With programming or web design, “clean and elegant” is a satisfaction for me second only to “it’s working by god it’s finally doing what it’s supposed to.” So what am I to do when I’ve got a perfectly clean and elegant solution – one that involves zero data entry and only takes up a handful of lines in my XSLT stylesheets – that slows things down so badly that it takes nearly a minute to load the homepage of my application?

I’ve got a choice here. The alternative is two XML files (one for each problem area) that list all of the data I’d otherwise be grabbing dynamically out of all the files sitting in a certain directory. This is time-consuming and not very elegant (although it certainly could be worse). The worst part is that it requires explicit maintenance on the part of the user. Wouldn’t it be nice to be able to give my application to anyone who has a directory of XML files without any need for them to hand-customize it, even just a small part?

On the other hand, I can’t expect Web users to sit there and wait at least 30 seconds for TokenX to dynamically generate its list of texts, an action that would take a split second if it were only loading the data out of an XML file. I already have all the site menu data stored in XML for retrieval, meaning that modifications need only take place once and that nested menus can be easily entered without having to worry about the algorithm I’m using to make them appear nested on the screen in the final product.

You can tell from reading my thought process here what the solution is going to be. It’s too bad, because aiming for elegance often ends up leading you to better performance at the same time. Practicality vs. idealism: the eternal question to which we already know the answer.

trusting the computer, and getting there with XSLT

If you are working in a functional, stateless language, but can still get away with for loops in a more conventional way thanks to for-each functions – should you still favor recursion over explicit for loops? Discuss.

Now that I am, as the title implies, “getting there,” I want to reflect a little on the learning process that has been XSLT. In my last post I glossed over what makes it (and functional programming languages generally) distinctive and, for people who are used to procedural languages, unintuitive and hard to grasp at first. This will be a post with several simple points, but that’s quite in keeping with the theme.

The major shift in thinking that needs to happen when working with XSLT, in my opinion, is one of trusting the computer more than we are accustomed to. It all stems from letting go of telling the computer how exactly to figure out when to execute sections of code, and letting it make the decisions for us.

I made a comment recently: “I know I’m getting more comfortable with XSLT because suddenly I’m trying to use recursion everywhere I can, and avoiding the for-loop like a crutch.” As others I talked to put it, this is idiomatic XSLT.* In other words, it’s one of the mental leaps that you (and I) have to make in order to start writing elegant and functional code (no pun intended) in this language.

What is recursion? In this case, to oversimplify, it’s how XSLT loops.** In a procedural language – C++, Java, most languages other than Lisp dialects, to be honest – recursion is clunky and wasteful; telling the computer specifically to “do this the number of times I tell you, or until this thing reaches this state” is how you get things done. This means those languages have state, too – you can change the value of variables. That’s important for the counters that are the backbone of those loops. If there were no variable to increment or otherwise change, a loop would either never execute (in the case of a while loop), execute only once, or loop endlessly. None of these is very helpful.

So how do you get the effect of a counter-based loop, at least of the “for each thing in this set” variety, in a stateless language (all variables are permanent, effectively constants) that discourages the use of for-each loops in the first place?

The first answer is the simpler one: xsl:apply-templates or xsl:call-template. This involves the trust I introduced above. Coming from a procedural language, it’s hard to trust the computer to take care of things without telling it exactly how to do them (keep doing this thing until a condition is met), because you’ve become so used to spelling everything out. It might once have been hard to get used to explaining the proverbial peanut butter sandwich recipe in excruciating detail for the sandwich to get made. Now XSLT is forcing you to go back to a higher level of trust, where you tell the computer “do this for all X” without telling it how to do that.

xsl:apply-templates simply means, “for all X, do Y.” (The Y is in the template.) It’s unsettling and worrying, at least for me at first, to just leave this up to the computer. There’s no guarantee that templates will ever be executed, or that they will be executed in order. How can I trust that this is going to turn out okay? Yet, with judicious application of xsl:apply-templates (like, where you want the results to be), it will happen.

Second, the recursive aspect. Keep calling the template until there is nothing left – whether that’s a counter or a set of stuff. But how do you get a counter without being able to change a variable? With each xsl:apply-templates (or xsl:call-template), pass an xsl:with-param and adjust the parameter as needed: call the template with the rest of the set, minus the thing being handled in the current template. When it runs out of stuff, the results are returned. Again, this takes the explicit instruction – xsl:for-each is very heavy-handed – and turns it into “if there’s anything left, keep on doing this.” It may seem from my description that there’s no real difference between the two, and in their end result, there isn’t. But this is a big leap, and moving from instinctively reaching for xsl:for-each to xsl:apply-templates is conceptually profound. It is getting XSLT.
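Since a full XSLT example would take more space than it’s worth here, the same conceptual shift looks something like this in Python terms; it’s an analogy for the pattern (explicit counter-driven loop versus “call again with the rest of the set until nothing is left”), not XSLT itself.

    # The same shift, sketched in Python as an analogy (not XSLT itself):
    # an explicit loop versus a function that keeps calling itself with the
    # rest of the set until nothing is left.
    def shout_all_loop(items):
        results = []
        for item in items:  # heavy-handed: spell out exactly how to iterate
            results.append(item.upper())
        return results

    def shout_all_recursive(items):
        if not items:       # nothing left: the accumulated results come back
            return []
        # handle the first thing, then call again with the rest of the set
        return [items[0].upper()] + shout_all_recursive(items[1:])

    print(shout_all_loop(['a', 'b', 'c']))       # ['A', 'B', 'C']
    print(shout_all_recursive(['a', 'b', 'c']))  # ['A', 'B', 'C']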

Finally, a note on the brevity and simplicity of XSLT. I’ve noticed that once I’ve found a good, relatively elegant solution to what I’m trying to do (they can’t always be!), suddenly my code becomes very short and very simple. It’s not hard to write and I don’t type for a long time. It’s the thinking and planning that takes up the time. Obviously this is true for programming just about anything, but I find myself doing a whole lot less typing this summer than usual (compared to languages I’ve used such as C, C++, Java, Python).

It’s both satisfying and disappointing at the same time: getting a template that recursively creates arbitrarily nested menus makes me want to jump up and high-five myself; the fact that it’s only about four lines and incredibly simple makes me wonder whether any of it was that hard to begin with. But this isn’t limited to XSLT or even programming: the 90-page thesis seems like more work than the 40-page thesis, but if the shorter one tackles more profound ideas and/or is simply better written, the length and time comparison falls apart. The time spent typing and the length of the output don’t tell us as much as we’re used to assuming.

That’s what I have to say about what I’ve been doing this summer, as far as learning XSLT goes. I still can’t say I like it. The syntax is maddening. I haven’t been in this long enough to judge whether it’s the best choice for getting something done within a lot of constraints. But at the very least I’ve finally had that brain shift again, the one I had with Lisp so long ago, to a different approach to problem-solving entirely. And that feeling is profoundly gratifying.

Speaking of good feelings, I’ve been able to have extended chats about XSLT with multiple people on the U of M School of Information mailing list this summer, after someone posted asking for help with it. It’s a good thing I replied despite thinking, “I’m not an expert, so I probably don’t have much to offer.” Talking with the questioner and the others who replied-all to our emails was really enlightening: getting feedback, hearing others’ questions about how the language works (questions that I hadn’t articulated very well myself), and giving my own feedback. There’s nothing like teaching to help you learn. I would not have been able to write this post before talking to my fellow students and figuring it out together. (Or, you would have read a very unclear and aimless post.)

(Very last, I’d like to recommend the O’Reilly book XSLT Cookbook for using this language regularly after getting acquainted with it. If I were continuing on with an XSLT project after this internship, or working on adding more to this one, I’d be using this book for suggestions.)

* Thank you all for reminding me that this word exists.

** XSLT now includes not only xsl:for-each but also the XPath 2.0 for expression. These do have their appropriate uses, and I use them quite a lot, because my application doesn’t give me a huge number of chances for recursion. I’m being dramatic to make a point.

Cross-posted from the iSchools & Digital Humanities intern blog