I’ve felt oddly disconnected from the resource-rich exploration of trust that’s taken place over the past weeks, and not just because I was away for half of it. In this post, I want to try to better understand my own ambivalence about the trust talk. There are lots of chunks to this that seem related to me– maybe I’ll even be able to articulate some of the connections by the end of this post.
In his #connectedcourses pre-req post, Jonathan Worth writes:
Networks and societies as a whole cannot function without this omnipresent low-level trust and security. I have to trust other drivers to abide by traffic laws when I take my children to school. I have to trust that the teachers at their school will teach and care for them during the day. This enables me to go to work and specialise as a photography teacher of still other people’s kids. I trust that my employer will in turn pay me for doing so and if they don’t, then I have to trust that the law will serve to force them. This trust and predictability provides a degree of “security”. Without it I cannot drive as efficiently on the roads, my children must be home schooled and I cannot make time to specialise in teaching photography because I will be too busy farming behind secure walls. Lack of trust inhibits civic engagement.
I don’t trust that people will drive safely, I expect it. An external force has determined the ways we should all drive; our presence on the road announces that we accept these civic and social conventions– even if we don’t believe in them. This is a social contract, an agreement we have made with each other as members of the same society. So, similarly, I don’t trust that teachers will care for children during the day, I expect it. I don’t trust that my employer will pay me, I expect it. In each of these circumstances, external forces have established the standards– norms–that can be enforced by other external forces.
On the other hand, at least the way I see it, trust is mostly an individual & internal matter. Trust is mine; I give it according to my assessment of another individual’s values and actions, of their worthiness of my trust. I am under no obligation to trust anyone.
The criteria for behaving & interacting during this MOOC experience were stated explicitly at the start of the course. It was called “trust,” but since it was more a description of expected behaviors articulated by an external source, I saw it as a kind of social contract. I didn’t write it, and the course wasn’t of my design, but it made sense and I decided I could agree with everything there. In other words, deciding to participate in #connectedcourses wasn’t a matter of trust, but a matter of accepting the operating procedures.
What I’m learning is that connecting with others on a 1:1 or 1:some basis is where trust comes in– who are these people, anyway? How much do I want to reveal? How much do I want to acknowledge of what is revealed to me? These are not things that can be decided by anyone else but me.
I don’t see that I have the right to share stuff about other people; their images of themselves, their personal lives belong to them. So openness for me has never been about anything but me. In fact, I have drawn some pretty clear lines about where and with whom I share what. For example, when there was a critical family illness and I realized I could be driven insane fielding phone calls, I quickly set up a blog that I updated daily. What got posted there was deeply personal. But I don’t think it was easy to find or associate with me or others involved. Even during my first social media experiences in 1995, via the what-now-seems-primitive listservs, I was aware that I did not have the right to talk about people in identifiable ways– hmm, is IRB mentality inborn? (And yes, I do see current value in this mode of connecting.)
I currently struggle with how much of my thinking I share online. I admire people who do academic blogging. I don’t know if I have the guts to do it. That probably has more to do with the impostor syndrome that plagues doctoral students. It’s on my list of things to take up when I finish my dissertation, especially because academic publishing seems so…well, more about that later.
What I need to chew on about my own teaching is what my course design affords students (or unfairly demands of them) in terms of going public with their learning, questioning, etc. Depending on the burning questions or concerns of a class, this might be a great group inquiry.
Call me a cynic, but I’m not sure privacy exists. I have a credit card, I own a car, I have bank accounts: I can be found and I can be tracked. But when Jonathan asked us to consider how we might design privacy into our courses by having us think about a couple of hypothetical students, I had a lightbulb moment: what kind of data matters, and to whom? Who gets to decide?
Recently, I had one student who was terrified to blog. She had been trained to see social media as a place where she could do nothing but destroy her reputation. (She was only in her mid-twenties– can we please put the myth of the Digital Native to rest?) I’m not sure how to help someone understand the best spirit of the Web without having them be on the Web, so we just kept talking about it. She tried blogging and while she talked a lot about course stuff, she used the blog to work through some of her fears. In the open. As it related to her future as a teacher and a learner. I think that takes guts and a willingness to learn. She herself identified that her issues were ultimately about herself and her ideas about school, teaching, and learning. Her satisfaction was deeply satisfying to me as a teacher and remains influential for me as a learner.
Recently, I said something to a high-level computer scientist about the culture of the Web. He looked at me with a puzzled expression and said, “Don’t you mean cultures?” Now *that* was an interesting comment. I must admit that my beliefs about and hopes for the Web range from cynical to Pollyanna-naive. I see that conventions of how to be online together that emerged from the nascent Web still permeate the Web. (Howard Rheingold is the go-to guy on this.) Pekka Himanen’s work theorizing the spirit of the hacker ethic is woven through the discussion section of my dissertation– or it will be, once I get there. I see K-12 ed-tech folks discussing these practices and others like them as part of the so-called Open Web view of digital literacy. I believe in these. It’s the Web at its finest.
But I also believe that even the finest universe has its dark, evil, I-don’t-want-to-be-there side. I hang out on Reddit a bit, but I choose my subreddits with care. (The Foucault subreddit: good stuff.) Some of the demographics of the user base are interesting– this Atlantic survey might be dated, but it’s still informative; it informs the lens through which I read. (Like, 18-29 year-old males? Puh-leez.) I’m interested in the story of 4Chan and some of founder Poole’s claims about the significance of anonymity for freedom of speech, but definition #2 in Urban Dictionary pretty much says it all for me.
Does the worst of the Web leach into real life, particularly for women in the tech world? I believe it does. I know a young software developer in a startup who recently gathered with her female colleagues to talk with the company’s CEO about the sometimes oppressive and creepy work environment for women. That’s gutsy stuff. This kind of feminist activism is hardly ever mentioned, let alone discussed, in mainstream or other media. It certainly doesn’t come up in the relentless push for getting more women into STEM.
Kim Jaxon’s post, Trust, Koolaid, and 1,000 Papercuts, talks about the horrifying aspects of the worst of it. She describes the way thought-leaders Kathy Sierra, Anita Sarkeesian, and Julie Pagano have been harassed, threatened, and ridiculed and writes,
Most of us truly believe that you must change structures and systems (institutional, platform or otherwise) if you want to effect real change. As Rafi Santo has argued, ideologies are built in to systems and we must hack them for the better.
But how do we make systems safer while still valuing an open web? How have we ever been able to account for the worst of human nature? How do we help people grow up with the web?
I really struggle with her questions. I can get behind the point on the DML page about Santo that “it’s important for youth to see how technology embodies values, and be able to tweak (or “hack”) this technology when it doesn’t align with their values”. His specificity provides useful boundaries and criteria for tackling the ideologies embedded in specific aspects of a system that relate to youth.
I just don’t know if I believe that it’s my responsibility “to help people grow up with the web”. It *is* my responsibility to help my students, friends, and colleagues contribute to the best parts of Web culture. It’s my responsibility to support, through action, people– especially women– who are harmed on the Web because of their powerful presence and work there, and I need to learn more about how to do this. I can try to work with evils I come in contact with. But I’m not going to eradicate evil. And it’s still going to exist no matter what happens to make systems safer. I think my questions are more about where the best places are to devote my energies.
All this seems connected to me. I just don’t have an easy statement about how.