In York I received a flyer that was uniquely individualized. A couple of weeks earlier, for Thanksgiving dinner, my partner Emily and I had ordered out from Pizza Hut and bought two “Create Your Own” pizzas. The advert we received afterward featured a picture, presumably Pizza Hut’s rendering of a typical “Create Your Own” pizza, with the caption “Your Create Your Own Pizza was just the beginning . . .”. We realized that the ad was tailored to our past purchases, yet the same ad, with only the pizza on the front changed, had gone out to our friends’ houses as well. It struck me as a printed form of Google’s personalized web ads based on browsing history: in both cases, each visitor receives a unique response, but in essence the same algorithm is applied to every visitor. On my WordPress blogs I also receive a lot of spam “comments,” in which a stock phrase or statement is pasted into the comment box of a blog post and caught by a program called Akismet. I usually take time to read them all before deleting them, and they follow the same format as the Pizza Hut and Google ads: a blank filled in with information pertinent to the viewer, an outgoing link to something else (in Pizza Hut’s case, printed pictures of additional pizzas), and other acquired information slotted into a set framework. Because the ad was assembled this way, I began thinking about fill-in-the-blanks in literature, particularly in terms of genre and mode as they relate to textual features.
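The fill-in-the-blank pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration, not anything Pizza Hut or Google actually runs: the template text and slot names are invented, and the point is simply that one fixed framework plus viewer-specific slots yields a “unique” message for everyone.

```python
from string import Template

# A fixed framework with slots for viewer-specific information
# (template text and slot names are made up for illustration).
ad_template = Template(
    "Your $past_order was just the beginning . . . "
    "Have you tried our $suggestion?"
)

def personalize(record):
    """Apply the same template (the same 'algorithm') to every viewer."""
    return ad_template.substitute(record)

ours = personalize({"past_order": "Create Your Own Pizza",
                    "suggestion": "Stuffed Crust"})
print(ours)
```

Every recipient gets a different flyer, but `personalize` never changes; only the record fed into it does.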
I think this is the last old post I had to write. This is focused on my final project for Prof. Witmore’s class in May:
Over the course of a semester, Professor Witmore introduced our class to writings about relational patterns and networks, then applied them to the study of literature. We read books such as Graham Harman’s “Prince of Networks: Bruno Latour and Metaphysics”, Franco Moretti’s “Graphs, Maps, Trees: Abstract Models for a Literary History”, and Alexander, Ishikawa, and Silverstein’s “A Pattern Language: Towns, Buildings, Construction”, which slowly coalesced in my mind and led to my final project: a Java program designed to help prepare Docuscope-ready text from a plain or formatted transcription.
As I mentioned previously, I began this project tired, under a time constraint, and with no idea where it would lead me. However, I had hope. At a few earlier research meetings, Bill Blake had mentioned doing some work with Docuscope in the sense of Hamlet without the Prince. While I had never really seen the result of his work, the idea of a play without its main character or characters enticed me. It didn’t take me long to pick my favorite play, Romeo and Juliet, and sit down at my computer. Referencing my second edition of the Riverside Shakespeare, I cut and pasted the lines spoken by each character until I had, in effect, two different plays. I retained the initial text file, split it up by act, and then created a new text file of the whole play without any of Romeo’s or Juliet’s lines. I also separated those acts again, without their lines, and created two new files: one with all of Romeo’s lines and one with all of Juliet’s. The images I came up with are below. The methods remained the same as in the previous post: hierarchical clustering of Docuscope’s output at the Cluster level, using a Ward’s test with best-guess analysis and a color-coded, distance-scale dendrogram.
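The cutting and pasting I did by hand could also be done programmatically. Below is a minimal sketch, assuming a transcription where each speech is introduced by an uppercase speaker prefix on its own line (e.g. `ROMEO.`); real editions such as the Riverside format speech prefixes differently, so the regex and function names here are illustrative only.

```python
import re

# Assumed convention: a speech begins with a line like "ROMEO." and
# runs until the next speaker prefix. Real transcriptions vary.
SPEAKER_RE = re.compile(r"^([A-Z][A-Z ]+)\.\s*$")

def speeches(lines):
    """Yield (speaker, text) pairs from a prefixed transcription."""
    speaker, buf = None, []
    for line in lines:
        m = SPEAKER_RE.match(line)
        if m:
            if speaker and buf:
                yield speaker, "\n".join(buf)
            speaker, buf = m.group(1).strip(), []
        elif speaker is not None:
            buf.append(line)
    if speaker and buf:
        yield speaker, "\n".join(buf)

def without(lines, excluded):
    """The play minus the excluded characters' speeches."""
    return [text for who, text in speeches(lines) if who not in excluded]

def only(lines, included):
    """Just the named characters' speeches."""
    return [text for who, text in speeches(lines) if who in included]
```

With this, the “play without Romeo and Juliet” file is `without(lines, {"ROMEO", "JULIET"})`, and the solo files are `only(lines, {"ROMEO"})` and `only(lines, {"JULIET"})`.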
Just like the start of any good epic, the invocation of my personal muses in the previous post signifies the beginning of what I would like to think is an epic progression in my life. However, the beginnings of my research with digital methods were not so grandiose as what Milton or Homer have left us. I was given Docuscope in February of 2010 and received JMP training in early March. I played around with both for a while, but between schoolwork and the demanding nature of my student organization (I was gone six weekends in a row from late March to early May), I was not able to do as much as I wanted. However, on April 30th the libraries hosted a digital salon to “Showcase Digital Arts and Humanities at UW-Madison”. Prof. Witmore, whose blog Wine Dark Sea deals with much more of this kind of work and is linked on the right, invited me to present at this conference-style salon alongside him, Bill Blake, and Prof. Valenza. About a week out from the presentation date, I sent Prof. Witmore a PowerPoint with the images clustered below. There are two sets of three: the first covers Shakespeare’s canon, and the second looks at King Lear and Cymbeline on their own, divided by acts. All the pictures are JMP-generated hierarchical clusters, built from Docuscope frequency counts using a Ward’s test with best-guess analysis and a distance-scale dendrogram (distance scale: distances in the dendrogram are proportional to the actual statistical distance). The first image in each set was run at the level of Clusters, the highest and broadest level of relationship between the data sets; the second uses Dimensions, the mid-level analysis; and the third uses LATs (Language Attribute Types), the finest grain of similarity.
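For readers without JMP, the same kind of analysis can be sketched with open-source tools. This is an analogue of the workflow above, not a reproduction of it: the frequency table below is invented toy data standing in for real Docuscope counts, and SciPy’s Ward linkage stands in for JMP’s Ward’s-test clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

# Toy stand-in for Docuscope output: rows are texts (e.g. acts),
# columns are category frequency counts. Numbers are made up;
# real Docuscope tables have many more categories.
labels = ["Lear I", "Lear II", "Cym I", "Cym II"]
counts = np.array([
    [12.0,  3.0, 7.0],
    [11.0,  4.0, 6.0],
    [ 2.0, 10.0, 1.0],
    [ 3.0,  9.0, 2.0],
])

# Ward's method, as in the JMP analysis. The linkage matrix encodes a
# dendrogram whose merge heights are proportional to statistical
# distance -- the "distance scale" described above.
Z = linkage(counts, method="ward")

# Leaf order along the dendrogram, without needing a plotting backend.
tree = dendrogram(Z, labels=labels, no_plot=True)

# Cut the tree into two groups, analogous to reading the broadest level.
assignments = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(labels, assignments)))
```

On this toy data the two Lear acts fall into one cluster and the two Cymbeline acts into the other, which is the kind of grouping the dendrograms make visible at a glance.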