Ain’t that interesting! An introduction to the linguistic, descriptive study of language


Some background information: While perusing one of my procrastination websites (“look at all the funny photos!”) today, I saw the picture that I am reproducing on the left. My linguist senses tingling, I scrolled down to the comments section, curious to see how other netizens would respond to the descriptive message conveyed in the picture. While some commenters seemed to accept a use of “can” that does not pertain to physical abilities, my attention was drawn to one particular post, whose author criticized another linguistic trend: the use of “literally” to mean “figuratively.” I felt compelled to write a response to said author, whose thinking about what’s “proper” or “sensical” in language reflects the widely held beliefs that there is a “right” and “wrong” way to speak a language, and that language must be “monitored” (by educated speakers) so as not to “devolve” into an incomprehensible, messy blob. As our exchange got more involved and as my comments got longer, I decided to post an edited and expanded version of my argument to my website. I am doing so both to create a handy self-authored reference for the future and to organize my thoughts on this issue, which–as a linguist–I feel quite passionate about. In the discussion that follows, I am deliberately avoiding many technical terms, linking mostly to second-hand research and discussions, and at times simplifying issues that are in reality more complex, because my intention is to challenge some traditional misconceptions about language by encouraging non-linguists to ask questions that they may not have asked before or that they may have shrugged off as irrelevant. As such, I hope to provide an accessible introduction to the descriptive, empirical study of language and to convince others of its value. So if you’re a linguist: Great! Please read on and leave a comment if you want to add or contest something. If you’re not a linguist: Great! I wrote this especially for you and I would also very much like to hear your thoughts.

First, as any introductory linguistics instructor will likely tell you on the first day of class, it’s important to introduce the terms “prescriptivism” and “descriptivism.” While some argue that this dichotomy is reductive or problematic, I think it is worth discussing. Prescriptivism, in general, refers to the idea that language “should” behave in a certain way and that some forms and usages are more “correct” or “proper” than others. These usages are essentially conventions that are especially valued in writing, in formal contexts, and among more educated speakers. Most of us have been exposed to prescriptivism in school, usually when we were taught explicit “rules” about our own/first language, such as “don’t use double negatives” or “don’t use ain’t” (note that many prescriptive rules are really proscriptions against a particular usage or form). The fact that prescriptive rules must be explicitly taught is important, because it suggests that these are not innate rules of our language, but rather socially prescribed linguistic behaviors. In contrast to prescriptive rules, linguists argue that there are descriptive rules (which some, though definitely not all, would further contend are innate). Descriptive rules are the kinds of rules we learn implicitly as babies and young children, such as the English rule “Add an -s to a verb stem when the subject is a third-person singular pronoun (he/she/it).” Most linguists therefore strive to describe the actual, observed linguistic behaviors of speakers of a language and to explain how and why these forms or usages emerged historically and socially. To further illustrate the difference between descriptive and prescriptive approaches–and hopefully, to convince you that prescriptivist “rules” are quite flawed–let’s take a closer look at a few common prescriptive recommendations.
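(A quick aside for the programmers among you: you can think of an implicit descriptive rule as a tiny procedure that speakers apply without ever being taught it. Here is a toy Python sketch of my own of the third-person -s rule mentioned above; it is only an illustration, and real English morphology is of course far messier, with irregular verbs like “be” and “have,” spelling changes, and so on.)

```python
# Toy sketch of one descriptive rule of English (my own illustration, not
# a real morphological analyzer): add -s to a verb stem when the subject
# is a third-person singular pronoun.
def conjugate_present(verb_stem, subject):
    third_person_singular = {"he", "she", "it"}
    if subject.lower() in third_person_singular:
        return verb_stem + "s"
    return verb_stem

print(conjugate_present("sing", "she"))   # sings
print(conjugate_present("sing", "they"))  # sing
```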

Let’s start with double negatives. As you probably know, many English speakers claim that double negatives should be avoided because, mathematically speaking, two negatives make a positive. There are a number of problems with this claim, but I will start with a historical counter-argument, because an examination of older texts written in English reveals that, since at least Old English (400s – 1100s AD), English speakers have commonly used constructions with double–or even multiple–negatives! This could result in a sentence like “No one would never want none of those cupcakes,” which would have been quite “normal” (or “unmarked”) in Old English and which Old English speakers would likely have understood perfectly well… just as double negation poses no actual challenge to modern English speakers, who certainly know what I mean when I say “I don’t want no cupcakes.” In fact, multiple negation used to be considered the “right” way to talk/write, and only recently (historically speaking) have some English speakers started discouraging these constructions by making appeals to mathematical rules. But there are problems with the mathematical explanation too. For example, why is it that, in explaining the “illogicality” of double negatives, we assume multiplication? Yes, -2 multiplied by -3 gives us the positive number 6. But that’s not the case for, say, addition, because -2 added to -3 results in -5. These two negatives do make a negative! So the assumption that underlies the prescriptive double negation “rule” is pretty shaky from a logical point of view. Also, if two negatives inherently made a positive in a language, then it would be really difficult to communicate in many other languages, including French or Polish, in which double and multiple negation is the norm or the standard construction. And as a “native” speaker of Polish I can tell you that sentences like “Nie chcę nic nikomu powiedzieć” (“I don’t want to tell no one nothing”) never struck me as weird or illogical.
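(Again for the programmers: here is the same arithmetic as a two-line toy snippet of my own, just to show that whether “two negatives make a positive” depends entirely on which operation you assume.)

```python
# Whether two negatives "make a positive" depends on the operation assumed.
print((-2) * (-3))  # 6  -> multiplication: the negatives cancel out
print((-2) + (-3))  # -5 -> addition: the negatives do not cancel
```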

Another prescriptive claim that I have often heard made about English is that “ain’t” should be avoided because it is “improper” and/or because it is not a real word. But how can something with an agreed-upon pronunciation, written form, usage, and definition not be a word? After all, ain’t those the very characteristics that make up a word? I think that one reason for treating “ain’t” as some kind of a monstrosity is that, unlike the irregular verb “to be,” it does not change form to agree with the personal pronoun. So, while we change the form of the verb “to be” depending on the pronoun that it follows (“I am” vs. “you/they/we are” vs. “he/she/it is”–but even here there is some overlap!), “ain’t” always remains the same (“I/you/she/etc. ain’t”). This invariability may seem strange, but it is in fact a common feature of English verbs, since we can use the same form of “sing” for almost all pronouns, except “he/she/it,” for which we add an -s. Moreover, we have other contractions that behave like “ain’t.” This includes “don’t,” which also keeps one form across almost all pronouns (“I/you/they/etc. don’t,” but again “he/she/it doesn’t”). But wait! Is “ain’t” even a contraction? It may not look like one because, unlike “aren’t” and “isn’t”–which are clearly contractions of “are”/“is” and “not”–“ain’t” can’t really be separated into “ai” and “not,” since “ai” is not a recognizable English word. But some historical sleuthing can again shed light on this linguistic mystery. As it turns out, “ain’t” originally really was a contraction of “am+not” and was pronounced/spelled as “amn’t.” However, due to phonological changes that took place over time, the “m” was deleted, because I think we can all agree that “amn’t” is kind of hard to pronounce. So the contraction “amn’t” became “an’t.” Now, around the same time, in English varieties where “r” is regularly dropped (as in contemporary standard British English), the “r” in “aren’t” was deleted, also resulting in “an’t.” A situation thus arose where one form, “an’t,” could denote multiple meanings: “am+not” and “are+not” (this kind of merger is a fairly common historical change). Once “am not” and “are not” started to be pronounced/written as one and the same, “an’t” acquired a more general meaning of “BE+not” and eventually turned into “ain’t,” which is still widely used among many English speakers today. However, “ain’t” today is stigmatized largely because it is associated with non-standard varieties of English, like African American Vernacular English (which is by no means used by all African Americans), that are generally considered “broken” English. Such stigmatized varieties are shrugged off as standard English with mistakes, even though they too are linguistically complex, systematic, and in no way less “logical” than standard English.

For our last example, let’s examine “literally.” As you may have been told by your friends, teachers or creators of Internet comics, people shouldn’t use “literally” when they don’t mean “in a literal sense” or when they mean “figuratively,” because it’s confusing/wrong/improper. I can understand English speakers’ resistance to this newly emerging use of “literally” because–as comics authors may again tell you–it seems to make the distinction between “literally” and “figuratively” obsolete. But, once again, there are some holes in this argument. Firstly, when I use “literally” in a sentence like “I could literally eat all those cupcakes,” am I really using the word “literally” instead of “figuratively”? As some linguists have pointed out, “literally” now functions more as an intensifier, so the new meaning is closer to “really” or “extremely.” So it is unlikely that someone would ever get confused by my use of “literally” and “figuratively,” since they have quite distinct meanings and since context almost always provides enough clues for you to know which meaning I am using. Why does it even matter if we use “literally” to mean “really”/“figuratively”/whatever, if we can all understand from context that we are simply exaggerating, often for humorous effect? After all, language users play with word meanings and change them to achieve particular goals all the time, so this newer use of “literally” is not really that strange or surprising. But maybe it’s just confusing to make one word mean something another word already means. After all, we already have the words “really” and “figuratively,” so why add a new meaning to “literally”? Well, let me ask another question: Why not? We do it all the time! Just as one example, let’s look at metaphor, in which we essentially add to word X some new meaning that is already expressed by or associated with word Y. For instance, you can say “Dave is a leech” and I know that you mean something like “Dave is a scrounger.” So would you argue that we shouldn’t use “leech” in this context because we already have the words “scrounger” and “slacker”? The metaphorical use of “leech” is not illogical as long as I can easily infer what you mean by “leech” in a particular context. And the emphasis on context is indeed a key tenet of descriptivism (and the focus of pragmatics research), because language rarely if ever occurs in a vacuum, and because we rely on context in our conversations all the time. If we look at language outside of context, then it does indeed get confusing. Just think of all the possible meanings of a simple verb like “to run”: “to sprint” (“to go for a run”), “to do” (“to run some errands”) or “to produce/air” (“to run a news story”) are only the tip of the iceberg, to use another metaphor!

As I hope to have demonstrated through the above examples, prescriptive recommendations, like “you shouldn’t use double negatives” or “you shouldn’t use ‘literally’ in a figurative/exaggerated way,” are often based on flawed assumptions about how language works and how people actually communicate. As such, I argue–as many other linguists do–that prescriptive “rules” are really conventions, and that they do not necessarily make language more “logical” or “correct.” Often, they stem from misconceptions about language (e.g., the belief that language works like math), from analyzing one language from the perspective of another language (e.g., the “rule” against splitting infinitives is based on the structure of Latin, in which an infinitive form is always a single word), from an individual’s aesthetic preferences or pet peeves (e.g., irritation with people who use “like” many times in a sentence or in a conversation), or from a general aversion to language change. Indeed, many prescriptions are a reaction against language change, or against what people perceive as language change, because such changes are seen as corrupting the language or as somehow making it less logical/pure/correct.

So while descriptivists and prescriptivists will agree that language evolves, they hold fundamentally different assumptions about language change and have different agendas. Prescriptivists generally believe that language change can and should be evaluated as either “good” or “bad,” and they are mainly concerned with preserving and promoting those linguistic conventions that they see as part of “proper” usage. In contrast, descriptivists try to eschew such subjective evaluations because they are interested in studying language empirically and because they believe that many of our evaluations of language reflect and promote racial, economic, religious, and other divisions in society. In practice, prescriptive conventions describe a (somewhat idealized) standard language variety that is considered prestigious at a particular point in time and that is most commonly used in formal or public contexts and in writing. And it can indeed be efficient to have some pre-established recommendations and stylistic conventions for writing or speaking formally (as politicians, news broadcasters or academics do, for example), so most descriptive linguists are not arguing that all prescriptive rules should be abandoned or that “anything goes” in language. What they are arguing is that these rules are socially constructed and that, since they are most commonly learned through explicit or official instruction, individuals who have limited access to education, or who have access only to lower-quality education, are linguistically discriminated against and economically disadvantaged. As a result, prescriptive assumptions perpetuate social and economic power imbalances, further marginalizing groups that have been historically oppressed or denied access to power (Disney movies are complicit in this too). But that is a topic for a whole book… And indeed, there are some great books about it, if you’re interested.

So why is prescriptivism such a major force in today’s society? I think one reason is that, although many contemporary linguists are descriptivists, it’s prescriptive linguists who get the most mainstream exposure and who are best known and liked by the public, which is partly why prescriptivism remains so popular. The other reason has to do with the history and nature of linguistics as a field. First, linguistics is a relatively young science, unlike, say, physics. While we have known for a long time now that the Earth is round, the systematic, objective study of language as a field is at best a couple of centuries old. It therefore does not enjoy the same repute as the “hard” sciences do. Moreover, language is (at least on the surface) different from physics in that it is a socio-bio-cognitive phenomenon–and linguists explore all of these avenues–and I think this makes some people believe that language can’t be studied scientifically or empirically (as a psychology major in college, I had an acquaintance who believed that psychology too was a “soft” science and that, as such, it could never be systematic and was therefore useless). Because descriptive (empirical) linguistics is relatively young and prescriptivism has probably existed ever since humans invented language, prescriptivism has become deeply ingrained in our educational systems. Consequently, we grow up believing without question that we “should” or “shouldn’t” pronounce something in a particular way or that we should avoid certain grammatical constructions because they are “improper.” And while, aesthetically or situationally speaking, some forms or usages may be more appropriate than others (as in poetry or academic writing), a subjective judgment is not necessarily the most insightful or the most productive way of thinking about language. So a person who says “ain’t” or who frequently uses “like” is not less intelligent than an individual who makes sure never to split his or her infinitives, because adherence to prescriptive rules is not necessarily an index of one’s mental acuity but is most commonly related to external factors (like socioeconomic background) or to internal motivations (like solidarity with a particular speech community). Unfortunately, it’s only if we pick up a book written by, say, John McWhorter or take a linguistics class in college that we are exposed to this “crazy” thinking about language that emphasizes such external variables and that doesn’t use words like “should.”

But even as a self-identifying descriptivist, I want to say that I think everyone “should” learn a little about descriptivism. Just as I find value in examining the aesthetics of language by studying or writing poetry and literature, I think there is value, for those who consider themselves prescriptivists or who have been exposed to prescriptivism their whole lives, in learning about descriptivism. In attempting to analyze language from a scientific, empirical point of view, (descriptive) linguistics allows us to challenge some of our own long-held beliefs about language and society. It also allows us to understand WHY people use language the way they do, even if their language use may at first seem chaotic or illogical. Of course, everyone has personal preferences and pet peeves about language–I certainly do too!–and everyone adjusts their language based on context and on agreed-upon conventions. But it’s important to distinguish what you prefer from an aesthetic point of view, or what is merely a social convention, from what is an empirical, structural rule or feature of a language. Moreover, it’s important to be aware of the socioeconomic associations of certain linguistic forms and usages, as our attitudes toward them can be complicit in perpetuating social stereotypes and other injustices.

Finally, it is important to remember that language is fun and bendable–we play with it all the time! A language like English has existed for a long, long time and, while it has changed and “bent” drastically over the centuries, it’s still a complex, precise, and beautiful system that rarely ever fails us. So let’s try to enjoy it a little bit more instead of worrying about it so much!
