Posts

Protecting Our Space

According to a new survey, 20% of American college students now say it is acceptable to use physical force to stop a speaker from making “hurtful or offensive comments.”  Catherine Rampell, in the Washington Post, reads this as a growing rejection of the principle of free speech.  I think she’s right.  It does seem that Americans are increasingly willing to accept censorship and silencing.  Or at the very least, they are more willing to take active measures to protect their discursive space.  Why?

The first and most obvious answer, I believe, is the dominance of consumer logic.  The world of late capitalism is ruled by “choice.”  Through our consumption habits, we are expected to construct our own reality.  I can customize my home or outfit—sculpt it to the exact image I want to project—so why not my information stream?  Of course, as Cass Sunstein has argued, exposure to opposing views is a necessary social good.  Consumer logic undercuts such thinking, though.  It sidelines the expert (Sunstein) and the long-term (democracy) in favor of immediate, emotional satisfaction.  When we think as consumers, therefore, it is only logical to censor and silence.

So people shut down speech because they want to, and believe that, as consumers, they should get what they want.  Where does this desire to silence originate?  Of course, the alien is always disconcerting.  Still, this new survey data indicates that people are increasingly troubled by opposing views.  Or perhaps we are simply more attuned to them.  Perhaps because of the homogenization of our discursive space the alien sticks out, demands our attention (and our challenge), more than it once did.  When I spend most of my time in a filter bubble, the sliver of the outside world that sneaks through is bound to be upsetting.

I do wonder though if there are other factors at play. This is very speculative, but I wonder if the structures of belief which we use to define self and world are shakier than they once were.  In our multicultural, multivocal world even the most closeted thinker must know—at least on some level—that other views are always out there.  Perhaps in earlier, less connected times these views were more distant, and hence less threatening.  And/or perhaps our relation to knowledge has changed.  Perhaps we can say that with modernity and postmodernity some sort of ground has disappeared, and this makes us fundamentally insecure.

We can imagine, for example, a true believer, someone so confident in his views that opposing beliefs are seen only as objects of amusement. Such would be the position of a medieval Christian laughing at a Hindu, perhaps.  The Hindu’s gods are so distant, and the Christian’s understanding of how the world “is” so solid, that the former’s religious claims cause no offense.  Now compare this to students trying to shut down a conservative speaker, Ben Shapiro at Berkeley say.  They find it intensely offensive that Shapiro claims there are only two genders. Shapiro’s views fundamentally hurt these students.  Why?  Why can’t they just laugh at him?  Certainly, they “know” that gender is a spectrum, a social construct.  They know it as certainly as the medieval Christian knows the true nature of God….

My point is that it seems that what it means to know has changed.  On some (subconscious) level we have internalized the idea that knowledge is relative, rhetorical and shared.  Leftwing activists need Ben Shapiro to acknowledge gender is a spectrum because, simply put, we can’t be sure of anything anymore.  There’s an abiding sense of unreality, a feeling that everything is up for negotiation.  The negotiation is public, but the outside works its way in, shaping how the individual thinks.  This would explain why we see students chasing conservative speakers off campus.  And why we see Trumpian attacks on the “lamestream media.”  In both cases the principle is the same: I want (or need) to believe the world is X.  When you say it is Y, it makes my life harder.  I must therefore stop you from saying Y.

In short, in a world of excess—of connection and unbridled choice—we recognize that everything is shared, everything is unstable.  We must take an active role in constructing our reality.  And this means being constantly on guard against threats to that reality.

Grad Unionization or No, Pitt Needs More Financial Transparency

Graduate student unionization.  Of late, it seems that those of us engaged in funded graduate study are caught between [insert Game of Thrones reference].  As a student at the University of Pittsburgh, for example, in the past couple weeks I’ve received an email from Provost and Senior Vice Chancellor Patricia Beeson laying out the university’s official position.  I’ve also been following the Intellectual Poverty blog of one Andrea Hanna, a graduate student in communications at Pitt, and supporter of unionization.  Let’s parse the claims within, and see what’s going on.

Beeson basically claims that education (including networking, the development of practical skills, etc.), rather than financial compensation, is the primary point of graduate study.  As such, she doesn’t want to foreground financial concerns.  If there are any issues with the current funding system, she argues, they should be addressed piecemeal—on a departmental basis—rather than through the broader framework of unionization / collective bargaining.

Hanna, on the other hand, claims that the university is starving her to death!  Having come to Pitt from Northern Ireland, she finds it very difficult to get by on her graduate stipend.  Among other measures, she’s had to resort to handouts from a local foodbank.  Her blog is dedicated to tracking this “intellectual poverty” and her attempts to overcome it.

Now, I’m not going to make a claim for or against unionization.  I would like to say, though, that my experience as a grad student at Pitt (five years as a PhD candidate in the English department) bears little resemblance to Hanna’s.  Still, I respect what she’s doing.  I think transparency is important: we need to get our (financial) experiences out in the open so we can have an honest debate about what problems exist and how they can be solved.  In short, we shouldn’t cede to the administration’s desire to obscure financial concerns.  As such, let me relate my experience.

My official job title at Pitt is “teaching fellow.”  According to Pitt’s Graduate Studies website, this means I make $9,590 per term, or $19,180 per year (plus health insurance).  In exchange, I teach one section of freshman composition (or a similar course) per semester.  My class meets for 3 hours a week.  As I’ve taught this class before, my out-of-class preparation time is limited—probably about 5 hours a week (this includes meeting with students, reading/responding to student emails, grading blog posts, etc.).  Also, five times a semester I read a batch of student essays: this is pretty time-intensive, taking about 8 hours per batch.

So, over the course of a fifteen-week semester, I work about 160 hours (45 in-class, 75 preparation, 40 grading).  For this I get paid $9,590, or about $60 per hour.  In the English department, funding along these lines is guaranteed for at least five years.
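For anyone who wants to check that arithmetic, here it is as a quick Python sketch (the hour counts are just my rough estimates from above, not official figures):

    weeks = 15
    in_class = weeks * 3        # 3 hours of class per week -> 45
    prep = weeks * 5            # ~5 hours of prep, emails, blog grading per week -> 75
    grading = 5 * 8             # 5 batches of essays at ~8 hours each -> 40
    total_hours = in_class + prep + grading      # 160 hours per semester

    pay_per_term = 9590
    print(total_hours, round(pay_per_term / total_hours, 2))   # 160 hours, ~$59.94 per hour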

Unlike Hanna, I find I am able to live quite comfortably on my Pitt salary.  I have a roommate, and thus pay only about $700 per month for rent and utilities (gas, electricity, cable/wi-fi).  I buy groceries at Trader Joe’s, where I spend about $250 a month.  For recreation I do regular millennial stuff: drink craft beer, go out to eat, see bands.  I own a 2003 Toyota Corolla (so no car payment), and though I have student loans, am fortunate that they are in deferment (hence no loan payments).  So in short, despite making a relatively low wage, as a single, rather frugal person, I find that almost every month I have money left over.
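And here is the monthly budget side as a similar sketch, assuming (purely for illustration) that the yearly stipend is spread evenly across twelve months:

    yearly_pay = 19180
    monthly_income = yearly_pay / 12    # ~$1,598 per month, under the even-spread assumption
    rent_utilities = 700
    groceries = 250
    left_over = monthly_income - rent_utilities - groceries
    print(round(monthly_income), round(left_over))   # ~$1,598 in, ~$648 left for everything else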

I don’t want to imply that my experience is typical.  In fact, after reading Hanna’s blog, I recognize that it’s not.  As such, whether grad students unionize or not, I feel that the university needs to do a better job of making salary / work information publicly available for comparison.  How much, for example, does Hanna (or a biology PhD) make per hour of work?  How much do they actually bring home, and were they appropriately informed of this situation before taking a position at Pitt?  These questions obviously inform whether unionization is needed.  Likewise, if the university refuses to provide such information, one must conclude that unionization is indeed needed.

Of course, financial transparency should also extend to faculty members.  How much do faculty and staff in the English or communications department make, for example?  How does this compare to those in the Business school?  At many colleges this information is publicly available.  Not at Pitt.  Because she ranks among the highest-paid university employees, we do know that Provost Beeson earned $492,133 in 2016.  Hanna claims to make $17,500 per year.  This indicates that Beeson is approximately 28 times more valuable to the university than Hanna.  Is this true?  I don’t know.  I do feel, though, that we should get all the salary data out in the open, so we can properly debate such claims.
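For what it’s worth, that “28 times” figure is nothing more than the two published salaries divided, as this little sketch shows:

    beeson_2016 = 492133
    hanna_per_year = 17500
    print(round(beeson_2016 / hanna_per_year, 1))   # ~28.1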

So, in short, from what I’ve seen, any claim that graduate students at Pitt, as a whole, are “impoverished” is a bit ridiculous.  But some student employees obviously have grievances.  I call on the university to compile and make available detailed salary and work information, so that the members of the Pitt community can decide if these complaints are valid.

ESL Manifesto

As someone who has taught writing to both native English speakers and second-language learners, I’ve noticed something of a “two cultures” situation between these ventures.  The former is dominated by English departments, generally, while the latter is dominated by linguistics.  These two groups have different ideas about how writing should be taught.  Lately, I’ve been thinking about where my own views fit in.  To help figure this out, I’ve drafted a short statement of how I teach writing, a “this I believe” statement, you could say.  Here it is:

  • A writer improves by writing.  The job of the writing teacher is to create an environment in which the writer has to write.
  • To write is to create meanings (interpretations of our shared world) and make those meanings understood.  As such, the student writer must receive constant feedback as to how her meanings are received.
  • Increased linguistic sophistication is achieved when a writer is forced to create meanings beyond the forms on which she normally relies.  The learning environment should be structured to encourage such movement.
  • Conscious knowledge of the writing act (grammatical rules, names for textual features or stages of the writing process) is useful to a limited extent.  The introduction of such knowledge should always be subordinate to active meaning-making.

Reviewing my claims, a number of things stick out.  First, as is perhaps apparent, I make no distinction, on a theoretical level, between teaching native and non-native speakers.  In all cases, I’d argue, teaching writing is a matter of triggering the innate human tendency towards meaning-making.  We learn a second language, I’d say, the same way we learn a first language: by doing it.

Second, my sidelining of the conscious elements of the learning process might surprise some.  Now, I admit that some knowledge of basic textual forms is beneficial.  For example, the idea that it often improves uptake to say what you’re going to say, say it, then say what you said (introduction-body-conclusion) is something everyone should know.  In my experience, though, by the time they get to me (a university writing instructor), most students already know such rules, at least in the abstract.  If there’s a problem, it’s that they can’t actualize this knowledge.  This means that presenting ideas about how writing is or should be done is usually not an efficient use of class time.  Instead, the students should be the ones producing the content.  They should be doing the writing and thinking, not the teacher.

Third, the ecological nature of writing should be respected.  Smaller units of discourse are inevitably shaped by the larger units of which they are a part: a word gets its meaning from a sentence, a sentence from a paragraph, etc.  This means that you should be very careful about decontextualizing language units.  Consider the sentence.  If you want better (richer, more technically correct) sentences, you can’t focus solely on individual sentences.  Instead, each sentence must be engaged within a larger discursive structure.  This is what I’m trying to get at when I speak of students creating “meanings.”  Meaning, as used here, is a complete idea, projected into the world for a purpose.  This purpose, in turn, shapes each sentence, paragraph, etc.  By focusing on meanings (instead of decontextualized units) students learn to engage in the dialectic between part and whole which is inherent in the writing act.  This helps them become better writers.

Finally, the value of difficulty should be acknowledged.  Students must write, but to grow, they must do more than just write: they need to move beyond the forms they typically rely on.  This implicates content.  Students need to be forced to write about things they haven’t written about before.  And these topics should be appropriately complex (relative to the student’s current level).  Even if students are writing every day, I’d argue, their growth will be limited if the topics they are writing about are too simplistic or too familiar.  In such cases, they’re just displaying their abilities, rather than expanding those abilities.

That’s it.  The above ideas are largely drawn from (American, English-department based) composition theory.  I honestly don’t know how they would be received in ESL circles.  But I welcome any discussion the above might spark.

On Attunement

Attunement.  A word I often deploy (e.g., my recent claim that writing instruction involves, at heart, the cultivation of an “ethics of attunement”).  Lately though, I’ve come to think that I’ve been using this term somewhat thoughtlessly.  In the following I’d like to map out attunement’s various meanings, and in the process, argue for a redefinition that foregrounds attunement as a conscious act.

At the most basic level, attunement implicates sound.  A group of individuals is “in tune” when the members of that group vibrate at the same resonance or pitch.  Attunement is the act of moving into alignment with the group’s shared frequency.  There’s an interesting mix of conscious and unconscious, mental and material elements at play in such an act (which perhaps explains attunement’s recent popularity as a means to describe writing and rhetoric).

Attunement, it seems to me, can be driven by an articulable desire—intent to attune, we might say—or it can be totally outside the realm of conscious control.  It can be a rational, step-by-step process of experimentation and adjustment (as when a singer flexes his throat muscles, trying to match a note on a scale) or it can be something that the body seems to do completely on its own (as when we find ourselves becoming nervous around a nervous person).  There’s always some bodily or material aspect at play.  Attunement can never take place completely in the mind or “on paper.”  We’d never say that one formal equation, for example, is in tune with another.  Relatedly, attunement implicates the emotional or affective.  Within psychology, I am told, attunement indicates how in touch one is with the moods or emotional states of another.  To be attuned is to register, and respond to, those states.

So attunement casts a wide net, indicating the ability, either conscious or unconscious, of one entity to adapt (by physically adapting) to another.  With its hint of emotional receptivity, and related ability to capture so much beyond the logical, the term has become commonplace in rhetoric and composition.  Like “care” or “hospitality,” attunement is almost always used in a positive sense.  If we dig a bit deeper though, we realize that attunement is in fact ambiguous, both morally and otherwise.  To be able to intuit that your friend is upset, and display sympathy, is attunement, sure.  But Hitler, for example, also showed a great degree of attunement in his ability to register the energy of German crowds and replicate that energy in his own bodily movements.  So attunement can be for good or ill.

As noted, I have recently written of composition’s ethics of attunement.  In that discussion I used attunement to capture composition’s commitment to teaching students how to know what needs to be done in a given rhetorical situation.  This process is never totally cerebral: we have to be able to “read” the emotional tenor of an audience, for example.  It’s never completely outwardly focused either: we have to constantly monitor both the situation and ourselves, wary of the ways in which our biases and predispositions shape what we see and feel.  My original argument was that via this dual focus we can help bring our thinking and action in line with what is demanded by a situation.  This ability to attune is ultimately what makes a good writer.

Integrally, attunement, as described above, always involves an act of judgement.  I obscured this fact in previous discussions and would like to make it clear now.  As commonly used, attunement—because it is so extra-rational—seems to imply that one has no choice in the matter.  If the crowd wants a speaker to stroke their anger, this thinking goes, the attuned rhetor is one who provides it.  To go against the crowd, to keep your breathing regular and your heartbeat steady when all around you breaths and beats are racing, represents a lack of attunement, yes?  As the term is conventionally used, this seems to be the case.

We don’t want to teach student-writers to be like Hitler (obviously).  So how can we understand attunement in a more sophisticated manner?  We need a force which checks attunement.  What force though?  To what is the process of adaptation responsible?  The answer, I’d like to suggest, is the ideal.  The ideal is the good, that towards which we strive and by which we measure our practice.  It is a verbalizable statement (not a feeling or “vibe”) and necessarily abstract.  It is actualized via stories which illustrate the application of the ideal in specific contexts.  For example, one might say that she is driven by a commitment to “Justice.”  She might know stories—specific, context-rich examples—of when justice prevailed and when it did not.  Given a situation, it is her responsibility to measure that situation, and compare it to her set of stories.  Through this process, she can know how to think and how to act.  Should she allow herself to become attuned to the fury of the crowd—to channel and embody that passion—or should she remain detached?

This analysis adds a third element to what I previously defined as the dual motion between self and world.  An “ethics of attunement,” we can now say, involves triangulation among self, world and ideal.  Unlike pure bodily or emotional attunement, this is by definition a conscious process. It involves thinking.  Of course, there are no guarantees here.  We must “read” ourselves, the situation and the various ideals implicated, and can never be sure that our reading is right.  In this regard, judgement in the face of actual choice imbues every element of an ethics of attunement.  As suggested, this articulation marks something of a break with previous definitions of attunement, which focused too heavily (in my opinion) on the unconscious and bodily.  It is necessary though, I believe, if we are to understand attunement as a moral act.

Putting the Beatles in Context

The centerpiece of my writing class is always the lived experience of the student. I try to stress though that our experience of the world is never disinterested or given. Instead, what we see and hear and feel is always shaped by various forces. Here’s a lesson plan that seeks to illustrate this point.

Note: this is for a 75 minute class.

Lesson Plan:

To begin, I had my students listen to the Beatles’ epic “A Day in the Life,” and do a short (~7 minutes) freewrite describing the experience. My prompt asked them simply, how does this song make you feel? What does it make you think about? Why?

* This activity could utilize any piece of music, as long as 1) the students are not overly familiar with it and 2) it has a substantial entry on Wikipedia. “A Day in The Life” works particularly well, I found, because of its challenging nature and the well-documented (and interesting) circumstances of its composition.

After freewriting, I asked if anyone knew anything about this song (some recognized it was the Beatles, but no one knew its name). I then told them the name, and asked them to go to the relevant Wikipedia page and do some research. “Find out where this song comes from,” I asked. I gave them 15 minutes to read about the song. Though instructed to start with the song’s Wikipedia page, they were encouraged to follow whatever research path grabbed their attention.

We then listened to the song again, and did another freewrite. My prompt this time asked them to note any differences in what they heard or felt or thought. In short, I wanted them to reflect on how background knowledge changed their experience of the song.

Theoretical Justification:

I know from my own experience that learning the context and compositional background of a piece of music (or film or text) inevitably alters how I engage with that work. I was hoping that my students would experience the same effect, and that reflecting on those changes would make them more aware of how knowledge (and context in general) shapes their understanding of the world.

I also feel that engaging deeply with an object (especially a disruptive one like this song), and attempting to share that experience, is a fundamentally beneficial activity for young writers. It forces them to put their subjective experience in symbolic form. It’s also useful for them to see how others make sense of a shared object. This tracks with one of the main goals of my class—to better understand how we see the world, and how this differs from how others see it. Though I didn’t focus on it much, the varying research paths taken could also provide fruitful grounds for discussion.

Analysis:

After our second freewrite, we spent ~45 minutes discussing what we had written. I started off by having some students read their first freewrite aloud. Their responses were varied and fascinating. Some students wrote of being “confused” and “scared” by this “trippy” song, with orchestral parts which reminded them of the score to a horror movie. Others wrote about how some parts (Paul’s verses, in particular) reminded them of childhood. The dominant tendency, after doing some research, was to focus more on the lyrics and the story behind the song (i.e., an acquaintance of the Beatles dying in a car wreck). This, predictably, led to the students hearing an increasingly plaintive element.

Perhaps the most telling response was from a student who wrote about how at first, the unusual structure of the song caused her “anxiety.” This anxiety was relieved once she did some research and “knew what the song was about.”  This response says much about this student’s relationship with novelty.  It is my hope that after exploring this relationship in the classroom, she’ll be more inclined to take note of it in other contexts.

Conclusion:

This was a fun exercise, and I certainly saw changes in my students’ experience of the song.  I’m inclined to believe though that to really facilitate the kind of inter-contextual transfer I’m seeking, it may be necessary to have the students draw some generalizable conclusions from the activity. Towards that end, perhaps this in-class activity could be followed by an essay assignment in which students discuss this “experiment” and what it says about the relationship between knowledge and lived experience.

In group discussion I’d also like to put more emphasis on what the differences noted “mean.”  For example, the song’s background story made one student feel less anxious.  What does this change say about the importance of narrative coherence in her world?  Certainly this question was implicit in our discussion; if I were to teach this activity again though, I’d like to make it explicit.

Twitter and the Trump Tapes (A Lesson Plan for Freshman Composition)

I recently had a successful class session which involved evaluating tweets made in response to the infamous “Trump tapes.”  I thought I’d share my lesson plan, in case any other teachers are interested.

Some background: I teach freshman composition at a large public university. The theme of our course is “thinking about thinking,” with the underlying premise being that this sort of (self-)reflection is necessary to be a successful writer.  We do a lot of activities which involve trying to understand the worldviews (or ideologies, you could say) which underlie certain claims.  This activity is in that vein.

Note: This is designed for a 75 minute class.

Background Material:

Prior to this class session, we watched and discussed this interview with Professor Nicholas Epley, a behavioral psychologist from the University of Chicago.  In it he discusses “egocentrism” and how individuals innately view the same event (he uses 9/11 as an example) in different ways.

My class also uses a standardized heuristic to critically analyze statements.  We created this together and have termed it “The DACO method.”  Here is a handout which explains this method and provides an example.  In short, it involves taking a statement or belief and tracing the Definitions and Assumptions which underlie it, the Consequences to which it could lead, and where it fits in a range of Other Opinions.  For this lesson plan to work, it’s not necessary that you use the DACO method.  If you do want to use it though, it may be useful to go over the above handout as a group.

Lesson Plan:

I began the class by breaking the students into groups of 2 or 3.  I then explained that we were going to watch a video that illustrates Epley’s point about the subjectivity of interpretation.  We then viewed this CNN report featuring the video in which Donald Trump is caught making various vulgar comments about women.  [Note: this version of the video is edited slightly, but still pretty offensive.  You may want to issue a “trigger warning.”]

After watching the video, I distributed this handout, which lists 8 tweets interpreting said video.  Tweets, of course, are very short, which makes them neat encapsulations of the writer’s worldview.  Using our DACO method, we then worked together as a class to interrogate the first tweet.  The goal was to try and understand “where the writer is coming from,” how they see the Trump video (and the world at large) and how we can learn to negotiate with such a perspective.

The first tweet states: If you’re like ‘that’s just men being men’ after listening to the #Trump Tapes it’s seriously time you get some new male friends.

My class discussed how “men” and “friends” might be defined in this case.  We then discussed the assumptions at play, particularly how this writer likely views Trump’s comments as unusual and wrong, and anyone who engages in such talk as shameful.  Regarding consequences, we decided that this writer wants less vulgar talk because it’s “offensive,” meaning it upsets certain people.  Going deeper, we realized that the writer may believe that such talk leads to physical violence.  He or she may therefore view their tweet as a part of an effort to reduce such action.  Finally, we discussed a range of other opinions.  Opposing opinions can often be generated, we found, by challenging the writer’s premises.  For example, if an opponent could show that vulgar talk doesn’t lead to violence, the argument implicit in this tweet would fail.  Such a belief (that vulgar talk doesn’t cause physical violence) is an example of an “other opinion.”

After analyzing the first tweet as a class, each group worked separately to analyze the other 7 tweets.  After about 20 minutes, they were asked to present their findings to the class, facilitating another group discussion.

Conclusion:

Overall, I found this to be a fun and intellectually lively activity.  The tweets examined come from a variety of perspectives; through critical analysis the students had the opportunity to dwell in those perspectives, enriching their understanding of the other (and the way s/he thinks and writes).  Because tweets are so short, such analysis requires both creativity and attention to the nuances of language.  Also, by examining the intended consequences of each tweet—which I frame as “what the writer is trying to accomplish”—the students began to think about rhetorical tactics.  These are all valuable outcomes, in my opinion.

Students ≠ Kids?

As noted many times on this blog, I work within the philosophic tradition known as American Pragmatism. William James proposed this tradition’s core principle—the pragmatic method—as a way to resolve seemingly intractable “metaphysical questions.” In short, it holds that to know an object, we should examine that object’s effect on other objects. I’ve found that this simple move—tracing the consequences of a belief or statement or action—can prove remarkably useful in clarifying my thoughts. In the following I’d like to demonstrate the pragmatic method in action. Specifically, I want to interrogate a commonplace I often encounter as a college writing teacher: students ≠ kids.

Before we begin, I need to clarify what I mean by “commonplace.” As I understand it, commonplaces are ready-made verbal formulas. They are the bits of distilled wisdom, imparted to us by our various communities, that help guide our actions.

Commonplaces are obviously important. They help us understand and interact with our environment. We therefore become very attached to them. This emotional investment can sometimes blind us to their actual nature. Me and my commonplaces get so close, James would say, that I start to see them as ontologically true. And that’s a problem.

Consider a US Marine, guided by the belief that as a Marine, she is “always faithful” and “first to fight.” These commonplaces shape the Marine’s actions. Hence, they’re important. If the Marine is a pragmatist though, she recognizes that “always faithful,” though it may (seem to) fit her to a T, remains a mere verbal formula. It’s been applied to her from without and therefore remains open to revision (or even rejection). It’s true, sure, but true only because of what it does in context. In other contexts, or for other Marines, it may be false.

Now let’s turn to a pedagogical example. College students are adults, not children (students ≠ kids). This is pretty much a truism among progressive educators, its utterance sure to garner a round of head nods at the conference or faculty meeting. Is it true or false though? Following James, we can find out by tracing its consequences.

So what does students ≠ kids do? Well, first it encourages teachers to “take the training wheels off,” to make students responsible for their own learning. A long line of progressive educators, from Maria Montessori on down, would suggest that this is a positive move.

So our example can do positive work. If we wish to follow James though, we must keep in mind that this statement is in no way ontologically true. We can’t, for example, prove empirically that college students are adults and not children. Instead, we must view this statement for what it is—a community generated object acting on other objects. Among certain constellations of objects, its consequences may be other than positive.

Consider a common situation faced by writing teachers. It’s near the end of the semester. You’ve worked through a carefully designed syllabus, given your students every opportunity to think critically and learn and grow. Despite this, some students remain mired in bland thought and language, comfortably ensconced in the status quo. This is a frustrating moment. And for some teachers, reminding themselves that their students are NOT fully formed adults may be a potent ameliorative tactic. It can help the teacher read more generously, find new reserves of patience. In short, writing teachers can’t expect college freshmen to think and write like Theodor Adorno. Conceptualizing college students as in-process, as plastic, in short, as children, may help teachers come to terms with this fact.

So, here we have a commonplace (students ≠ kids) that in certain contexts does positive work. In other contexts, the inverse (students = kids) does positive work. From a Jamesian perspective, we can therefore say that this commonplace is both true and false. To make this determination we consider the thinker, the context and what the commonplace does for that thinker in that context. In short, we must use the pragmatic method.

Admittedly, such an analysis makes an implicit moral claim. It suggests that it’s better (more logical, more socially useful) to think of our beliefs as tools rather than objective descriptions. As tools, we’re free to change said beliefs as circumstances necessitate. We’re free to view our students as both adults and children, for example. William James makes a strong case that this is the best way to approach our world. It forces us to stay flexible, makes us more generous thinkers. I too think this is a good way to live and to think. Use of James’s pragmatic method can help nudge us in this direction.

Trump, Sanders & The Violence of Idealism

On the classic Beatles track “Revolution 1,” John Lennon famously (and controversially) sends a mixed message about his support for violent revolutionary activity.  “If you’re talking about destruction yeah, don’t you know that you can count me out… in,” he sings.  Aside from being a prime example of Lennon as proto-punk, I think this juxtaposition says much about the nature of far-left politics.  In short, it suggests that political idealism, especially of the far-left variety, always contains an element of violence.  Relating this to current US politics, the question becomes, will Bernie Sanders supporters embrace this violence and vote for Donald Trump?

As I’ve written about before on this blog, all human activity takes place in the shadow of ideals— visions, however vague, of the way the world should be.  Those who profess radical political beliefs are particularly intimate with their ideals.  Ideals though, by their very definition, are situated in opposition to the world of actually existing human affairs.  This means that to embrace an ideal fully, to long for it and work towards its realization in the manner of a true radical, is to wish for the destruction of the actually existing.  After all, the real and the ideal can’t exist alongside each other.  One must give way.

So some level of violence is inherent in all idealism.  Likewise, on a practical level, a cursory review of the historical record reveals that indeed all (or nearly all) instances of revolutionary change are occasioned by destruction.  That the failure of the old is necessary for the new isn’t a particularly novel idea.  In terms of human psychology, it makes sense that things must get really bad before people embrace new options.  Simply put, if the old system is working, you’re not going to get revolutionary change.

This brings us to the current state of US politics.  Imagine you’re an idealistic Bernie supporter (or maybe you actually are).  You look at the world and see inequality and oppression.  Things are bad.  Unfortunately, as the outcome of the Democratic primary shows, most people do not think things are bad enough to require revolution.  Instead, they demand only a tepid incrementalism, a politics which leaves the current elites, and the system by which they benefit, in place.  In short, the majority of the population is still tied to the real, thereby rejecting the ideal (they can’t exist together, remember).

So what has to happen for the majority to embrace a (leftist) ideal?  The answer, unfortunately for most happy-go-lucky idealists, is destruction.  For radical political change to occur, the system must utterly fail.  The real world must be shown to be degraded, incapable of supporting human flourishing.  In short, for things to get better, things have to get much, much worse.

According to this logic, we can see how a far-left Bernie supporter could make a rational case for voting for Donald Trump.  As numerous experts have opined, a Trump presidency would be an unmitigated disaster.  The economy would collapse, international relations would fray.  By all indications a lot of people would get hurt, yes.  I’d like to suggest though that this violence—this unmitigated human suffering—is part of the logic of the ideal.  A Trump presidency, by this thinking, is desirable simply because it would be so terrible.

Of course, much radical literature supports my claim.  Mao and Stalin (and ISIS and Al-Qaeda for that matter) recognize that the road to utopia starts with instability, with destruction.  John Lennon knew it too.  In the end though, he backed away from the ideal, choosing to live in the real world.  Like him, Bernie holdouts must make a choice: the violence of the ideal or the (slightly less intense) violence of the real.  I hope the above makes clear the necessity of that choice.

Diane Davis’ Breaking Up (At) Totality, Leslie Jones & Twitter Trolls

Continuing my summer reading, I arrive at Diane Davis’ Breaking Up (at) Totality: A Rhetoric of Laughter.  Though I’ve rarely seen it cited, in my opinion, this is a key rhet-comp text.  I’d like to give a quick summary, then apply Davis’ ideas to a recent media event— Ghostbusters actor Leslie Jones being chased off Twitter by racist trolls.  I find Davis’ work highly descriptive of the current media environment.  But does it offer a prescription to help us survive said environment?

For my money, Breaking Up, with its stylistic wordplay and utter rejection of foundations, represents the zenith of postmodern rhet-comp theorizing.  Following Derrida, Avital Ronell and Victor Vitanza, Davis argues that language is suffused with what she calls “laughter,” a form of erotic energy.  Reason and logic, along with conventional discursive forms, inevitably attempt to clean up this eroticism, to pin down meaning.  This project is doomed to failure though.  The result is that subjects and social structures (which from Davis’s pomo perspective are an effect of language) remain fluid, unstable.  Davis celebrates this sense of fluidity and excess.  “To be spoken by a language contorted in laughter,” she writes, “is to be spoken by language on the loose: no/thing is excluded, censored, or negated” (95).

I’m struck by the similarities between the discursive scene Davis describes and that which I encounter everyday on social media.  What is the meme economy but the unchecked proliferation of meaning?  New forms emerge, and with them new logics, only to immediately be submerged by newer forms and logics.  Reason, as embodied in traditional philosophical discourse, has no place here.  Same with morality.  The old rules—about who can speak, what they can say and how they can say it—simply do not apply.  Instead, laughter in its most primal and yes, erotic, manifestation rules the day.

Consider the racist trolls hounding Leslie Jones.  Working under the auspices of notorious alt-righter Milo Yiannopoulos, they swamped her Twitter feed with racist insults and forged screenshots suggesting she made homophobic remarks.  Here we see language wildly out of control.  There’s no demand for “facts” or “objective” referent, no limitations on what meanings can be conjured.  Jones is a comedian and actor?  Jones is the source of AIDS?  Jones believes we need to “gas dese faggots”?  Driven by a desire for “lolz”, and unchecked by either formal restrictions (rules regarding spelling and grammar, for example) or social/technological restrictions, meanings proliferate.  Language, pulsing with vulgar, grotesque human desire, really is on the loose.  No/thing is excluded.

So, in essence, what we have on Twitter is a world where anyone can say anything, think anything—and for many subjects, especially marginalized ones like Ms. Jones, this excess is terribly traumatic.  So what should be done?  The first impulse for many, as Davis suggests, is to try and limit potential meanings, re-erect some of the barriers postmodernity has torn down.  On Twitter, this typically takes the form of appeals to authority (demands that trolls be banned, for example).  According to Davis though, all attempts to limit meaning will inevitably fail.  Indeed, there seems to be a direct relationship between censorship and erotic power—the more we attempt to restrict certain meanings (the racism of the trolls, for example), the more erotically charged those meanings become.  Simply put, the more we protest, the more lolz.

So we can’t restrict meaning.  What then?  It’s a bit tenuous, but I would suggest that Davis does offer something like a solution.  Quoting Victor Vitanza, she suggests an “antibody rhetoric” capable of “enhancing our abilities to tolerate the incommensurabilities” which make up the postmodern condition (102).  As I read it, such a rhetoric demands a rejection of foundations, a rejection of even the pretense of objective reference.  In short, it means we must come to view language—even terrible, hurtful language– as a laughing matter.

Let’s put this vision to work.  In the case of Leslie Jones versus the trolls, we have competing desires—namely, to enjoy Twitter (Jones) and to cause pain in the name of lolz (trolls).  These desires are incommensurable.  And they fuel meanings which are also incommensurable.  Going back to the erotic power of censorship, perhaps the way to drain power from the latter is through a sort of radical acceptance.  Jones must come to “tolerate the incommensurabilities,” to laugh with the (admittedly pathetic) desire of the trolls.  If she can do so, perhaps the incommensurabilities will be rendered mute.  The desire of the trolls, and the accompanying meanings, will fade.

Breaking Up (at) Totality is a radical text, as I believe my attempt to apply it to a real-life situation demonstrates.  In short, thinking along with Davis, we come to the conclusion that marginalized, maligned subjects must somehow come to believe that words simply do not matter.  This is a hard position to accept.  Indeed, word merchants of all stripes want us to believe the opposite.  In a world without rules though—which for better or worse is the world of social media—it may be our only option.

Object-Oriented Ontology: Radical, Autistic or Both?

Like many academics, I’m using the summer holiday to work through my reading list.  As such, I just finished Ian Bogost’s Alien Phenomenology, or What It’s Like to Be a Thing.  Bogost, a professor at Georgia Tech, is at the forefront of the “object-oriented” philosophic movement.  Simply put, this mode of thought seeks to displace humans from the center of the philosophic universe.  It’s interested in things—radishes, VCRs and arrowheads, for example– rather than human interpretations of things.

Now, I don’t seek to present myself as an expert on Bogost’s work.  My only exposure to this thinker is through Alien Phenomenology and his online presence (I follow his Twitter feed).  That said, from what I’ve seen, it seems that Bogost presents a rather radical, even frightening, vision of what we should be doing as teachers and scholars.  Let me explain.

According to Bogost, at the core of the object-oriented vision is the idea that “everything exists equally” (6).  Within this “flat ontology… the bubbling skin of the capsaicin pepper holds just as much interest as the culinary history of the enchilada it is destined to top” (17).  Put into practice, such a view urges philosophers to engage in deep metaphorical description of object-being, to speculate, as the book’s title indicates, on “what it’s like to be a thing.”

Admittedly, Bogost’s methodology makes for fun reading.  His descriptions of the inner lives of peppers and engine parts are indeed poetic.  It’s important to remember though that every philosophic position makes an implicit moral claim.  To do philosophy (or theory or criticism) is to venture that the world is a certain way and to suggest that others see it similarly.  Bogost seems to agree.  “Flat ontology,” he writes, “is an ideal” (19).

So what sort of action does Bogost’s ideal portend?  How does it suggest we relate to one another and the world at large?  The best metaphor to describe his position, it seems to me, is that of the autistic.  The object-oriented thinker is he or she who is able to tune out the messy, noisy world of human affairs and focus solely, engaged in rapt wonder, on the garbage truck or video game.  Bogost indicates as much, writing that “being is unconcerned with… human politics” (99).

Philosophy is just verbal gymnastics, right?  It doesn’t impact our daily lives.  No, not necessarily.  Bogost, like most philosophers, lives his creed.  Exhibit A.  The past few weeks have been trying times in the U.S.  On July 5 the ongoing genocide of black men at the hands of the state was made sickeningly apparent in a pair of internet videos.  A few days later, police were targeted for assassination on the streets of Dallas.  Twitter, understandably, was abuzz with pain and confusion as people tried to make sense of these events.  Not Bogost.  During this time, when nearly every post on my Twitter feed referenced our shared trauma, he kept tweeting about the design of the Amazon website and oddly shaped cucumbers.  He’s interested in things, remember.

As for me, I like things, but I’m first and foremost interested in attunement.  As an ideal, attunement demands openness to the subjective experience of others.  This openness is achieved though attention to the affective, the embodied.  So what ontology underlies such a vision?  Well, it’s definitely not flat.  Instead, as I see it, the field of being pulses with energy– human energy— with objects growing or shrinking in size as that energy flows through them.  Over the past few weeks objects such as “systemic racism” and “state-sanctioned violence” have come to the forefront of my existence.  They loom large, while things like video games and capsaicin peppers recede into the background.

Under the ontology I propose, being is relative—it’s based on context and positioning.  It’s determined not by things-in-themselves, but things-in-relation.  This means that to be ontologically aware, thinkers must always be looking outward and upward, at the world of objects and subjects, at things and the web of conceptualizations which bind them together.  This dual vision inevitably entails (unfortunately, perhaps) a deep concern for “human politics.”

At one point, Bogost describes his philosophy as a “new radicalism.”  I agree.  If we take his thought to its logical extent it demands an equivalence between the blood soaking through Philando Castile’s shirt and the system of human relations which drew that blood.  That’s a truly radical idea.  And one that, as an embodied, affectively attuned human being, I can’t agree with.