Foucault and the Discourse
Foucault focused on what he called le discours ("the discourse") -- the huge, shared discussion that takes place across an entire society, in public or in semi-privacy. Newspapers, textbooks, movies and any other public communication all come together into a larger conversation: the story of a culture.
Moreover, when he looked at that public discourse, Foucault saw far more structure and pattern than you might expect, and he saw a deeper level of influence on our personal beliefs than we might want to admit. He saw the discourse as a pervasive mechanism that guides our thinking, by setting the terms of what we think about.
All we are...is what we say
For Foucault, that was an especially powerful idea, because he saw "language" and "thought" as the same thing. He didn't believe that there is some kind of deeper, essential "self" buried within each of us, beneath the level of our own internal monologues. To a post-modern thinker like Foucault, that self-spawning narrative in your head -- that second-to-second story you tell yourself about what you're doing, and what you're thinking -- isn't something that a mythical deeper self is "doing". That internal stream of language is your consciousness, and there's no "you" outside the words of that story.
It's easy to see, then, why Foucault's exploration of the larger, external conversation became so central to post-modernism. Our own internal stories obviously can't spawn in a vacuum, and if there's no "self" involved to tell the story, there's only one way for the process to start. In their view, each of us really only starts -- and then maintains -- our own internal narrative as a reaction to the larger discussion around us. Our conscious selves couldn't exist without it, and the content of our internal stories is almost completely defined by it.
Foucault also saw that powerful, self-preserving forces move through the discourse. Many of those forces work to encourage consistent thoughts, cooperative behavior and, most importantly, a coherent story. "Religion", "censorship", "celebrity", "citizenship"...whether or not concepts like these directly control what we think, they definitely play a huge role in defining what we think about.
That's not to say that those forces are intrinsically bad. No society could exist without them. One of the most important functions they perform is to help filter out complexity -- to make it easier to figure out what's going on, and to decide what to do about it. That happens in two ways:
- The discourse acts as a gatekeeper, by encouraging access to some information and ideas (and making it much harder to be exposed to others).
- Within the realm of those ideas we're exposed to, the discourse supports us in acting decisively. It provides guidance for our decision-making, and models for how to follow through.
It's important to keep in mind that -- at that level -- the discourse is basically operating outside any moral context. It is what allows us to act collectively, whether our actions represent the best or the worst that our culture has to offer. The challenge isn't to somehow "stop" or "fight" the discourse, but to understand it.
A New Discourse Is Writing Itself...in SQL
Until now, while the public discourse has always lived outside of any given person, it has always been defined and applied through people. Even though most of the public discussion takes place through writing, the decisions of what to say -- and what to do in response -- have all been products of human thought.
Collaborative filtering, however, creates a narrative that is genuinely different. This new discourse doesn't need us to consciously participate in the writing process...or in how that process gets applied.
Get Ready For an "A Posteriori" World
The core issue is actually captured very clearly through a pair of ideas from classical logic: "a priori" and "a posteriori". While they might seem complicated, these two terms really just describe two different ways of reaching a conclusion:
- A priori: Starting out with pre-existing assumptions about how something works, and then trying to validate those assumptions.
- A posteriori: Starting without any pre-conceptions, and allowing insights to emerge through empirical observation.
Our minds tend to operate from an a priori model -- especially around broader cultural topics like religion, politics, and morality, where there isn't a lot of empirical evidence to overrule our pre-conceptions. By extension, that has also been the dominant mode for the cultural discourse.
Modern science has been the primary expression of an a posteriori approach. Empirical evidence comes first, and is supposed to trump personal bias. True, the classic scientific method begins with a hypothesis and then tests it, but even in that approach, the hypothesis is a means to an end. As our ability to measure the world and gain insights has grown, though, a more quantitative, epidemiological approach has arisen -- one that examines vast amounts of data, identifies trends and correlations, and then tries to figure out why those patterns exist.
Historically, the a posteriori model has been more trusted, since it's perceived as more objective. On the other hand, its role in our discourse has always been weaker, because it's always been translated through an a priori filter before it is applied. We may use the data from polls to decide whether or not to change a law, and new experimental findings may influence our opinion on the climate or abortion rights. What we don't have are mechanisms where the status of a new law, for example, is based directly on the outcome of an experiment -- where people aren't involved in the final decision of whether or not it's a law.
At least, not yet.
An Emergent Model of Reality
Quantitative engines, though, establish emergent, a posteriori conclusions from the data they're allowed to observe. While the larger parameters are obviously defined by humans, quantitative algorithms can -- and do -- reach almost all of their specific decisions outside of human influence. Which movies to recommend, which e-mail messages get through...it all happens in a realm of digital data. Very often, those "rules" can't even be translated into some kind of human-readable form, because they're not ideas. They're just statistical trends, buried in the data.
Even more importantly, they are empowered to act directly on those conclusions. These emergent decisions can be directly woven into the details of our lives, without any human consciously deciding how they're applied, or even being aware that they're being applied at all.
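To make that concrete, here's a minimal, hypothetical sketch of what an "emergent rule" actually looks like -- a least-squares fit, in plain Python, over data I've invented for illustration. The resulting "rule" isn't an idea anyone wrote down; it's just two numbers that fall out of the statistics.

```python
# A toy "emergent rule": fit a line to observed behavior via least squares.
# All data is invented; a real engine would use far more dimensions,
# but the principle is the same -- the "rule" is just numbers.
xs = [1, 2, 3, 4, 5]            # hypothetical: past purchases per user
ys = [2.1, 3.9, 6.2, 8.1, 9.8]  # hypothetical: dollars spent next month

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

def predict(past_purchases):
    """Act directly on the emergent trend -- no human-readable rule involved."""
    return slope * past_purchases + intercept

print(round(slope, 2), round(intercept, 2))
```

An engine can then act on `predict()` directly -- say, to set a spending limit -- without anyone ever phrasing the underlying trend as a statement a person could read and debate.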
The Implications of a New Discourse
A Hammer Isn't A Weapon...Till You Smack Someone In The Head
Although many of these tools are being implemented by government and commercial institutions, the issue isn't just whether or not filters give those institutions a new, more direct opportunity to influence our experience.
They do. They definitely do. But the issue is bigger than that.
What If My Doppelganger's an @ssh-le?
Imagine that you're going to a map site for driving directions, and that the specific route that's recommended isn't just based on distance and traffic conditions. Once it's able to account for the specific routes that people take, day to day, that recommendation can reflect:
- Societal Trends: If, statistically, the majority of people who have enough money to own cars and drive tend to avoid what they see as the "wrong side of town", then your recommended route is very likely to share that bias.
- Correlation Trends: One of the most important tools of quantitative engines is "look-alikes" -- the other people whose statistical profiles correlate most closely with yours. If other people who live in your suburb AND "like" organic foods...your recommended route is much more likely to take you past Whole Foods.
- Friend Trends: Finally, when a recommendation can factor in the behavior of your specific friends, then when you visit your UK clients for the first time, and you have to drive from the airport to their office, you'll see the back road all of your co-workers learned to take, to avoid the highway.
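The "look-alike" step in that list can be sketched in a few lines of plain Python. Everything here -- the people, the features, the weights -- is invented for illustration; real engines use vastly larger profiles, but the core move is the same: represent each person as a feature vector and rank everyone else by similarity.

```python
from math import sqrt

# Hypothetical behavior profiles: user -> {feature: strength}.
profiles = {
    "you":   {"suburb_oakdale": 1.0, "likes_organic": 1.0, "drives_daily": 1.0},
    "alice": {"suburb_oakdale": 1.0, "likes_organic": 1.0, "visits_whole_foods": 1.0},
    "bob":   {"downtown_loft": 1.0, "bikes_daily": 1.0},
}

def cosine(a, b):
    """Cosine similarity between two sparse feature dicts."""
    shared = set(a) & set(b)
    dot = sum(a[f] * b[f] for f in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def look_alikes(user, profiles):
    """Rank every other user by similarity to `user`."""
    me = profiles[user]
    others = [(other, cosine(me, p)) for other, p in profiles.items() if other != user]
    return sorted(others, key=lambda pair: pair[1], reverse=True)

print(look_alikes("you", profiles))  # alice ranks first; bob has no overlap
```

Once "alice" is identified as your nearest look-alike, her recorded behavior (driving past Whole Foods) can quietly become part of your recommended route.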
Does "Automatic" Mean "Authentic"?
In many ways, precisely because it works on an empirical model, the new discourse could be perceived as more "authentic". Collaborative tools offer the ability to short-circuit personal biases, by measuring and acting on an unvarnished record of what people really say and do.
From a "Market Economy" to a "Market Culture"
We're going to see the most fundamental effects as our social and cultural forces continue to shift to an emergent model. Human intervention obviously won't disappear completely, but more and more of our society's norms are not only going to be defined statistically, but they're going to be applied mechanically. Get ready for the speed limits in your city to be set -- and regularly updated -- algorithmically, based on traffic and accident data. Imagine a digital textbook that can prioritize or filter the information it presents, based on the behaviors and demographics of the local community, or specific personal details of the student and their family.
The technology to do these things already exists. The main factor governing how quickly and extensively they actually happen is cultural. The more that these tools become an established, accepted part of our collective discourse, the more readily they're going to be adopted.
The same discourse...that those tools are starting to write.
From Optional to Obligatory
The pace at which these tools are being integrated into our society is accelerating, but it's important to understand that the underlying reason for that acceleration isn't just that we're "getting more comfortable with them". That pace is being driven by deeper, and much more powerful forces:
- The "network effect" of desire: These tools don't just get more useful and more effective as more people interact with them. That growing impact means that their appeal, and their reach, grow exponentially as well. The advantages they offer are going to grow more and more compelling, and you'll see more and more people getting those benefits.
- A growing social obligation to participate: The fact that "more participation" = "better tools" means that everyone's benefits are highly inter-dependent. Your ability to benefit from a tool is highly dependent on whether or not I choose to participate. For something that's optional, like a social network today, the pressure to engage may be there, but it's subdued. When your friends start getting real economic benefits -- like better pricing or preferential treatment -- but only as a result of your recorded behavior, the pressure to join is going to get a lot more intense.
- A commitment to the pursuit of knowledge: Finally, the most compelling force driving engagement with these tools is that, in an emergent model -- by definition -- there's no initial "idea" that people can evaluate and decide whether or not they like it enough to participate. Participation is the only path to emergent knowledge. You collect the data first, and only then can insights emerge.
When it becomes a cultural norm to equate "participation" and "the pursuit of knowledge" (and it eventually will) -- then the decision whether or not to engage becomes a personal statement. It becomes an expression of your commitment to the pursuit of knowledge. You don't even have to be a scientist to help discover that 90% of people with a certain medical condition lived near a power line when they were kids. All you have to do is agree to participate.
On the other hand, opting out -- while it may be very reasonable, and done for all the right reasons -- will also become more and more of a clear statement that you value your personal interests more than the collective pursuit of knowledge. That choice may be completely legal, and it may be very common, but it's also going to increasingly be perceived as self-centered, and reactionary.
So, what does that mean we should do?
First of all -- just to be clear -- I'm not personally trying to advocate that perspective, or say it's all going to be fine and dandy. Honestly, I've grown up in an a priori world, and I think it's kind of creepy. I'm just saying it's pretty clearly going to happen.
Inoculating the discourse: "Personal Data = Personal Property"
The most important thing that we all can do is to help frame the debate right now, and insist on simple, powerful principles that will ensure that these tools play an acceptable role in our lives. There are obviously already many, many people focused on these principles, so this is nothing new, but to me, it boils down to a very simple idea -- that our personal data is our personal property.
That translates into two basic principles:
- Transparency: We should all have 100% visibility into the data that's actually captured about us, and how it's applied in our lives.
- Control: We must be able to add, delete or update that data. More importantly, we should each be the ones to decide who has access to it, and what we want in return for that access.
Big ideas, I know, and while they're simple to understand, they're far harder to put into practice. As I mentioned, I also know that there's a lot that's already been said about them, by some very smart people. (I'll be adding citations later today.)
The most important thing, as always, is simply for all of us to be aware -- not just that these tools are entering our world, but that they're going to influence it on a profound level. That doesn't mean that we need to be scared of them, or that we have to try and stop them. It does mean that it's very, very easy to underestimate them.
PLACEHOLDER: I'm going to add a little more detail on Foucault and the context around him. (Also, I want to give a bit of background about his personal life -- he kind of unraveled towards the end of his life, and ended up dying of AIDS. Anyone who's studied him will be aware of that.)
Collaborative filters are algorithms that filter and prioritize what we see online by taking information from other people into account. At the moment, they're most often used in e-commerce, such as when Amazon suggests that you add batteries to an order because that's what a lot of other people did, or when Netflix recommends movies that you might like, by comparing your personal ratings to other customers' scores.
At the moment, the growth of filtering technology is being fueled by its economic potential, but it's certainly not limited to e-commerce and finance. For most people, the most familiar use is the spam filters that comb through our mail -- modern spam filters like the engine that's built into Gmail don't just look at the content of a message to decide whether or not to categorize it as junk. They examine what other users have explicitly identified as "spam" in their inboxes, and use that data when they categorize the messages that come to you.
Finally, the acronym "SQL" stands for "Structured Query Language", and it allows programmers to manipulate large sets of data through commands that approximate English. Along with statistical and large-scale data-processing tools such as R and Hadoop, it is a central component of these algorithmic engines.
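For a flavor of what that looks like in practice, here's a small hypothetical example using Python's built-in sqlite3 module and invented ratings data -- the SELECT statement reads almost like the English question it answers ("which movies does the crowd rate highest?"):

```python
import sqlite3

# In-memory database with a toy ratings table (all data invented).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ratings (user TEXT, movie TEXT, stars INTEGER)")
db.executemany("INSERT INTO ratings VALUES (?, ?, ?)", [
    ("alice", "Alien", 5), ("bob", "Alien", 4),
    ("alice", "Heat", 2),  ("carol", "Alien", 5),
])

# Aggregate the crowd's behavior: average rating and vote count per movie.
rows = db.execute("""
    SELECT movie, AVG(stars) AS avg_stars, COUNT(*) AS votes
    FROM ratings
    GROUP BY movie
    ORDER BY avg_stars DESC
""").fetchall()
print(rows)
```

A recommendation engine built on top of queries like this one is acting directly on the aggregate record of what people did, which is exactly the dynamic described above.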