“Merry” Sue Coleman?

This one doesn’t have to do with semantics, but it seems like a Language Log post-to-be, and so I thought I’d beat them to the punch.

The beloved president of the University of Michigan, Mary Sue Coleman, is retiring soon, and she was recognized for her twelve years of service at last Saturday’s home football game against Nebraska.  (Michigan lost, btw.)  She spoke to the crowd during this recognition ceremony at half-time, and in the recording she sounds, well, a little more “Merry” than “Mary”.

Today, the University is responding to allegations that she was inebriated, offering the following story:

. . . during her remarks she used a wireless microphone with which she was not experienced . . . There was significant wind that caused the sound to be delayed and distorted and created feedback during President Coleman’s speech.

Now, IANAP (I am not a phonetician), so I wonder if any of you, dear readers, know whether speaking into a wireless microphone at a football stadium can cause you to sound drunk.

Why ask why?

More and more, I’ve become interested in the (often unspoken) connections between sentences in the same discourse. For instance, when someone says

I’m happy. I passed the exam.

they often actually mean

I’m happy because I passed the exam.

and not, for instance,

I’m happy despite the fact that I passed the exam.

or even worse

I’m happy that a small monkey from outer space told my sixth grade teacher that it would be a lie to say that I passed the exam.

So, why can we imagine / pretend that we hear certain connections between sentences, but not others?

One possibility is that sentences tend to raise certain common (or conventional?) questions, which need not be spoken. Subsequent sentences can answer these questions:

I’m happy. <Why?> I passed the exam.

“Why?” sounds natural after almost any statement (just hang out with your average 3-to-4-year-old), but questions like “What might have prevented this?” or “What did a small monkey from outer space tell your sixth grade teacher that it would be a lie to say?” just don’t come up as often. Other common follow-ups are:

  • What happened next?
  • Give me an example.
  • So what? / Why do I care?
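To make this question-raising idea concrete, here is a toy sketch in JavaScript (everything in it is hypothetical; note that the stub simply takes the answered question as an argument, since figuring out which question a sentence answers is the genuinely hard part):

```javascript
// Toy model: each statement implicitly raises a small, conventional set
// of questions, and the connective we "hear" between two sentences
// depends on which implicit question the second sentence answers.
const implicitQuestions = [
  { question: 'Why?',                connective: 'because' },
  { question: 'What happened next?', connective: 'and then' },
  { question: 'Give me an example.', connective: 'for instance,' },
  { question: 'So what?',            connective: 'so' },
];

// Stand-in: a real model would have to determine which question the
// second sentence answers; here the caller just tells us.
function connect(s1, s2, answeredQuestion) {
  const match = implicitQuestions.find(q => q.question === answeredQuestion);
  if (!match) return null; // no conventional question fits, so the discourse feels incoherent
  return `${s1.replace(/\.$/, '')} ${match.connective} ${s2}`;
}

console.log(connect("I'm happy.", "I passed the exam.", 'Why?'));
// -> "I'm happy because I passed the exam."
```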

Of course, this question-raising story is not really an explanation of the original observation, merely a restatement.  So, at the risk of sounding like a ’90s commercial, why ask why? Why are we so fascinated by causes?

Voicemail Semantics

[photo: an answering machine micro-cassette]

I vaguely remember discussing, one day in class at MIT, the meanings of indexicals (squirmy little words that change with the context in which they are spoken) in answering machine messages:

You’ve reached the Joneses. We’re not here right now, so please leave a message.

Now that I’m all grown up and have a class of my own, I use this exercise as a way to pump everyone’s intuitions about these words.  I first have my students write down what they think these words mean (or refer to) in general. Only then do I ask them what they might mean in a voicemail message (which in some ways is an even trickier case than an answering machine).

Many reasonable definitions for indexicals fail in the context of a voicemail greeting:

  • You is not some specific person the speaker has in mind, because they have no idea who will call them (think: wrong number!).
  • I/We/Me is not the entity creating the sound wave which conveys the utterance — that’s the caller’s phone.
  • Here is not the caller’s location, or the location of the cell phone, or the cell phone owner, or even the computer where the voicemail is stored.
  • Now is not (usually) when the speaker spoke; rather, it is when the listener is hearing the message. But imagine leaving a voicemail saying “it’s now 6PM, and I should be home by 10PM if you’d like to try again”…
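For readers who like formulas, the textbook picture these definitions descend from is Kaplan’s idea that each indexical has a character: a fixed function from contexts to contents. A standard sketch (the voicemail puzzle, on this view, is that it’s unclear which context c to plug in):

```latex
% Kaplan-style characters: a context c supplies an agent a_c,
% an addressee h_c, a location l_c, and a time t_c.
\[
  [\![\text{I}]\!]^{c} = a_{c} \qquad
  [\![\text{you}]\!]^{c} = h_{c} \qquad
  [\![\text{here}]\!]^{c} = l_{c} \qquad
  [\![\text{now}]\!]^{c} = t_{c}
\]
% Is c the context of recording or the context of playback?
% "Now" in the greeting seems to want playback, but "it's now 6PM"
% seems to want recording.
```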

Endless debates ensue, of course…

(P.S., I just found an article all about this topic: http://aardvark.ucsd.edu/language/ans_machine_phil_compass.pdf .)

GitHub Linguistics

The ProfHacker blog recently hosted a series of posts on GitHub, which is generally used as an online collaboration/version control/code sharing system for programmers. I have never used GitHub (mitcho suggested I post my tree drawing software there, but I haven’t done it yet), but ProfHacker got me thinking about how linguists might use it.

This would be a lot easier on GitHub

The blog posts concentrated on how this resource might be used for non-discipline-specific academic tasks, for instance, saving versions of your syllabus (which someone else could “fork” and change for themselves) or collaborating with co-authors around the world. But perhaps we could put GitHub to some more radical uses:

  • Truly massive collaboration on a paper, with people able to fix typos, make/suggest changes, and add data from their own language/area of expertise.
  • Easier workflow for authors and editors/typesetters to work together on a manuscript.
  • Projects in formalized grammars like HPSG, which are even closer to computer code, the kind of thing GitHub excels at.
  • I could imagine developing some sort of format halfway between an academic paper and a formal, computationally-implemented grammar, like “pseudo-code” for linguistic theories.  This format could be a less wordy way to present a theory on its own, without the arguments for it.
  • Regression testing for theories — collaboratively edited lists of data points that a theory of X should cover (a sketch follows this list).
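To make that last idea concrete, here is a sketch of what a collaboratively maintained “regression suite” for a theory might look like (all of the names, data points, and the predicts function are hypothetical):

```javascript
// A hypothetical, collaboratively edited test file: each entry is a
// judgment that any theory of binding should cover. Contributors could
// fork the file, add data from their own languages, and open pull
// requests whenever the theory under- or over-generates.
const bindingData = [
  { sentence: 'John likes himself.',             grammatical: true  },
  { sentence: 'Himself likes John.',             grammatical: false },
  { sentence: 'John thinks Mary likes himself.', grammatical: false },
];

// "predicts" stands in for whatever implemented fragment of the theory
// you have: a function from a sentence to a grammaticality prediction.
function runRegression(predicts, data) {
  const failures = data.filter(d => predicts(d.sentence) !== d.grammatical);
  console.log(`${data.length - failures.length}/${data.length} data points covered`);
  failures.forEach(f => console.log(`  FAIL: ${f.sentence}`));
}
```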

Any other ideas?

And now…

No. 1 The Larch

Last summer, I made a bare-bones syntax tree application, which I’m calling the Larch Tree Drawer, after the Monty Python sketch. The software:

  • Is written entirely in JavaScript, so it can do fast real-time updating.
  • Creates a PNG image which can be easily copied elsewhere.
  • Uses efficiently spaced trees (see [1]).
  • Automatically closes brackets.
  • Automatically opens brackets (triggered by an ALL-CAPS node label).

Check it out and let me know about any bugs and/or feature requests. I’m considering an option to output qtree-style LaTeX code, but I haven’t implemented it yet.  Also, I know that the dimensions of the PNG files are a little too large right now.

Using the default options can feel a little weird, but you’ll soon see how fast it can be to input standard trees.  For instance, type “S NP” and you’ll see:

[tree image: S dominating NP]

Then, add “John] VP” to get:

[tree image: S dominating NP (John) and VP]

Round it out with “V loves] NP Semantics” to get:

[tree image: the completed tree for “John loves Semantics”]

And you only needed two brackets!
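For the curious, here is my best guess at the logic behind that shorthand, as a self-contained sketch (a reconstruction of the behavior described above, not the actual Larch source):

```javascript
// Reconstruction of the shorthand: an ALL-CAPS token opens a bracket,
// a trailing "]" closes one, and anything left open is closed at the end.
function expandShorthand(input) {
  const out = [];
  let depth = 0;
  for (const token of input.trim().split(/\s+/)) {
    const closes = (token.match(/\]+$/) || [''])[0].length; // trailing "]"s
    const word = token.replace(/\]+$/, '');
    if (/[A-Z]/.test(word) && word === word.toUpperCase()) {
      out.push('[' + word); // ALL-CAPS node label: automatically open a bracket
      depth += 1;
    } else if (word) {
      out.push(word); // ordinary leaf
    }
    if (closes) {
      out.push(']'.repeat(closes));
      depth -= closes;
    }
  }
  if (depth > 0) out.push(']'.repeat(depth)); // automatically close the rest
  return out.join(' ').replace(/ \]/g, ']');
}

console.log(expandShorthand('S NP John] VP V loves] NP Semantics'));
// -> "[S [NP John] [VP [V loves] [NP Semantics]]]"
```

And if I do add the qtree option, the output for this tree would presumably look something like \Tree [.S [.NP John ] [.VP [.V loves ] [.NP Semantics ] ] ].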

[1] An algorithm from http://billmill.org/pymag-trees/ based on:

Buchheim, Christoph, Michael Jünger, and Sebastian Leipert. “Improving Walker’s algorithm to run in linear time.” Graph Drawing. Springer Berlin Heidelberg, 2002.

Walker, John Q. “A node-positioning algorithm for general trees.” Software: Practice and Experience 20.7 (1990): 685-705.

Exactly one donkey

Does this caption make sense with this picture?

[photo: a single donkey in a field with several horses]

Point to the donkey

Sure! Just point to the donkey and not one of the horses. But how about here?

[photo: a group of donkeys]

Point to the donkey

Not so much. It’s weird to talk about the donkey in a picture with more than one of the majestic creatures. Perhaps, we might reason, a phrase of the form “the X” only makes sense in a caption when there is only one X in the picture.

Not so fast! What about this picture and caption?

[photo: four men riding donkeys in Egypt]

Every man on a donkey has his legs around the donkey

Here, the phrase the donkey sounds OK even though there are four donkeys in the picture. How should we change our analysis of “the X” phrases? Well, Irene Heim (following Steve Berman) suggests that the word every makes the difference. Such words (known as quantifiers) allow us to zoom in until we just see one man on a donkey, like this:

[photo detail: zoomed in on one man on one donkey]

This is called a minimal situation involving a man on a donkey. There are three more such minimal situations in the picture, and in each case there is only one donkey. So, our generalization is saved — the donkey may not be unique in the whole picture, but it is unique in each of the zoomed-in minimal situations.

This is Heim’s solution to a problem pointed out by the philosopher Peter Geach via the following somewhat sadistic example:

Every man who owns a donkey beats the donkey.

One question this example raises is: what about a man who owns more than one donkey? Does he beat them all? Well, imagine a picture containing all male donkey-owners next to the donkeys they own. Heim suggests that for each man, we can zoom in one time for each donkey he owns. In other words, Geach’s sentence means something like the following:

Every minimal situation involving a man and a donkey he owns represents a case where the man beats the donkey.
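In rough logical notation (my own rendering, not Heim’s exact formulation), that paraphrase comes out as:

```latex
% s, s' range over situations; s' >= s means s' extends s;
% min(s, phi) says that s is a minimal situation in which phi holds.
\[
  \forall s\, \forall x\, \forall y\,
  \Big[ \mathrm{min}\big(s,\; \mathrm{man}(x) \wedge \mathrm{donkey}(y)
        \wedge \mathrm{own}(x, y)\big)
  \rightarrow
  \exists s' \geq s\;\; \mathrm{beat}_{s'}\big(x,\ \iota z\,.\,\mathrm{donkey}_{s}(z)\big)
  \Big]
\]
```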

This is a brilliant solution to the problem, but I think there might be a small hitch in Heim’s analysis. Consider the following modified version of Geach’s sentence:

Every man who owns exactly one donkey beats it.

This sentence intuitively means something different from Geach’s original: for instance, we no longer care whether men who own two or three donkeys beat them; we are only interested in the men who own one donkey total. However, here is what I think Heim’s analysis would yield for this sentence, since the only difference is changing “a” to “exactly one”:

Every minimal situation involving a man and exactly one donkey he owns represents a case where the man beats the donkey.

Notice, though, that a minimal situation involving a man and a donkey he owns will always have exactly one donkey. So, it’s no different from a minimal situation involving a man and exactly one donkey he owns. Crucially, even if a man owns four donkeys, once we zoom in on a minimal situation in which he owns a donkey, that situation will have exactly one donkey in it. The analysis seems to predict, contrary to fact, that the modified sentence should mean the same thing as Geach’s original sentence.
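In the same notation, the worry is that the two restrictors are satisfied by exactly the same minimal situations, at least on the most straightforward way of setting up the minimality condition, since minimality already guarantees a unique donkey:

```latex
\[
  \mathrm{min}\big(s,\; \mathrm{man}(x) \wedge
      \exists y\, [\mathrm{donkey}(y) \wedge \mathrm{own}(x, y)]\big)
  \;\Longleftrightarrow\;
  \mathrm{min}\big(s,\; \mathrm{man}(x) \wedge
      \exists! y\, [\mathrm{donkey}(y) \wedge \mathrm{own}(x, y)]\big)
\]
```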

Comments?