Friday, November 29, 2013

#edcmooc Human 2.0? Human+=Human?

Vice Admiral Tolya Kotrolya
Well, here we are! The final(ish) week of #edcmooc.  As I wrote in my tweet earlier this week, I think #edcmooc, like #ds106, is probably more of a way of life for some than an actual course.  Sure, there won't be additional prescribed readings or viewings after this week, so the course is technically over; however, the hallmark of any good course, as far as I am concerned, is one where the learners keep thinking about the material long after the course is done.  They keep engaging with the provided materials, and with any new material that they encounter, in critical ways.  In other words, once the course is done we don't shelve and compartmentalize the knowledge gained and leave it to gather dust like an old book on the shelf.

In any case, like previous weeks, here are some thoughts on the hangout, readings, and viewings from this week. To be honest, I don't particularly care about a certificate of completion, but I am interested in designing an artefact as a challenge to myself.  I am predominantly text-based, so doing something non-text-based (or artistic) would be something that pushes me a bit.  That said, I am not sure if I will be able to do this in one week's time. What do others think about the artefact?  Challenge accepted?

From the Week 3 Hangout

Last week (Week 2 Hangout) the group mentioned a quote, which was repeated (and corrected) this week: "We have a moral obligation to disturb students intellectually" - Grant Wiggins.
This made an impression on me, not because of the content of the quote, but rather the succinctness of it.  The content is something that I, luckily, have been exposed to in my career as a graduate student.  In my first graduate course, an MBA course, Professor Joan Tonn loved to ask "can you give me a quote?" whenever we claimed something to be true based (supposedly) on what we had read.  This was her way to not only get us to back up our claims with the research we read, but also to get us out of our comfort zone so that we could expand our knowledge and understanding.  I remember my fellow classmates and I scurrying those first few weeks in class to find the page numbers in our books and articles where our supporting points were mentioned, in order to be able to respond to can you give me a quote?

Another example comes from the end of my graduate studies, while I was finishing up my last MA degree.  It seemed that Professor Donaldo Macedo's favorite phrase was can I push you a little? In sociolinguistics he "pushed" us to think a bit more deeply about the readings, deconstruct what the readings were, and deconstruct why we held certain beliefs. This isn't always comfortable, thus the initial question can I push you a little (you could always say no, by the way).  In psycholinguistics Professor Corinne Etienne kept us on task; like Joan Tonn, we also needed to quote something, and make sure that what we were quoting supported our arguments and answered the posed questions.  This means that, unlike certain politicians, we couldn't side-step the original question and answer the question that we hoped we would be asked.  These are all examples of disturbing the sleeping giant that is the mind, waking it up to do some work and expand its horizons, not just respond back with some canned, comfortable response.

In my own teaching, I have adopted a devil's advocate persona. I try to disturb the calm waters of the course discussion forums by posing questions that may go a bit against the grain, or questions that might probe deeper into students' motivations to answer a certain way. I prefer the title of devil's advocate because, unlike can I push you a little, it doesn't necessarily mean that I hold the beliefs that I am asking probing questions about, but rather that I am interested in looking at the other sides of the matter, even if I don't personally agree. I don't participate in many faculty workshops or things like pedagogy chat time, so I often wonder how my peers do this in their courses - intellectually disturb students in order to get them to expand their thinking.  If any teachers, instructors, or professors are reading this, feel free to share in the comments :)

Another interesting point, mentioned by Christine Sinclair, is that it is difficult to mention the contributions of 22,000 participants in a MOOC.  I have to say that it's difficult to mention all of the contributions of a regular 20-student course in one hour's time frame! In the courses that I teach I try to do a one-hour recap podcast every week or every other week (depending on how much content is available, and how conducive the content is to an audio podcast), and I have a hard time finding the time to mention everything that was important to mention!  I can't imagine how many hours it would take to read, prepare, and produce a live hangout to get most of the contributions mentioned.  The MOOC Hangout would be like C-SPAN ;-)

Another difficult thing is figuring out who wants to be mentioned and who does not. This is a problem with a regular course of 20 as well. If you have a public podcast about the course, even if it's only "public" to the 20 students, some students don't want to be named because it makes them uncomfortable.  For my own course podcasts I go back and forth between mentioning names and just mentioning the ideas and topics brought up, acknowledging the contributions of students that way. The people whose contributions I mention in the podcast would know that it was their contribution, and they would (hopefully) feel some sense of acknowledgement, and thus get a sense of instructor presence in the class this way.

From the Videos

In the videos section of #edcmooc this week we had Avatar Days, a film that I had seen before. One of the things that I was reminded of was that I really liked the integration of avatars in real-life environments. It is a bit weird to see a World of Warcraft character on the metro, or walking down the street, but it's pretty interesting visually.



I do like playing computer games, but I am not much of an MMORPG person. I like playing video games with established characters, like Desmond (Assassin's Creed), Sam Fisher (Splinter Cell), Snake (Metal Gear), and Master Chief (Halo). I play for action, and to move the story forward.  For me these games are part puzzle,  part history, part interactive novel.  I only play one MMORPG and that is Star Trek Online. The reason I got sucked into it was because I like Star Trek, and this is a way to further explore the lore of that universe. I have 3 characters in Star Trek Online (one for each faction of the game) and while the game gives me a spot to create a background story for them, it seems like too much work. I really don't see my characters, one of whom you can see above  - Vice Admiral Tolya Kotrolya, as extensions of myself.

Watching Avatar Days again had me thinking: Are avatars an escapist mechanism? A way of getting away from the mundane and everyday? Are they extensions of real life, or someone you would like to be, or whose qualities you'd like to possess? How can we, in education, tap into those desired traits that people see in their avatars to help them move forward and accomplish those things in real life?  For instance, suppose I didn't play by myself most of the time, I were really active in my Fleets (the equivalent of Guilds in WoW), and I wanted to be the negotiator or ambassador to other fleets; I would guess that I would need some skills in order to qualify for that position. Let's say I continue to hone my skills in that role. Now, being an ambassador in real life is pretty hard (supply/demand and political connections), but can you use those skills elsewhere?  This is an interesting topic to ponder. I wonder how others think of their avatars.



True Skin (above) was also quite an interesting short film. The eye enhancements reminded me a little of Geordi La Forge, the blind engineer on Star Trek: The Next Generation.  These enhancements made me think a bit of corrective lenses for people with myopia or presbyopia. In a sense, people who need eyeglasses or contacts to see could be considered more than human if we really thought about it from an augmentation perspective. The portrayal in this video just seems foreign, and thus potentially uncomfortable, because eye augmentation to see in the dark, or to have information overlaid on what you see, is out of the norm for us at the time being. Another interesting thing to consider is memory aids.  We use memory aids a lot in our current lives.  Our phones have phone books in them, calendars, to-do lists.  If we don't remember the film that some actress was in, we look it up on IMDB.  I remember about ten or so years ago I had a friend who vehemently opposed any sort of PDA (remember those? ;-) ) because he prided himself on remembering his credit card numbers, phone numbers, and other important information.  Sure, some information is important to remember without needing to look it up; however, when you have memory aids for potentially less important information, such as who portrayed character-x in the 1999 remake of movie y, it frees your mind to remember, and work on, other more important things.  This way you are offloading (a computer term co-opted to describe a biological process) less important information to an external device to leave the main computer (the brain) free to do other things.

The virtual displays on someone's arm reminded me a lot of the biotic enhancements that can be seen in the Mass Effect series of games (speaking of Mass Effect, the background music in Robbie reminded me of Mass Effect). The thing that really struck me in this video was the quote: "no one wants to be entirely organic."  This is an interesting sentiment, but what happens to the non-organic components when the organic dies? Supposedly the non-organic components cannot function on their own, so where does the id reside, and is it transferable to a data backup, to be downloaded to another body upon the organic components' inevitable death?  The last question about this video is: when will it become a series on TV? ;-)

A quick comment on the Gumdrop video: I loved it!  It reminded me of a series on the BBC called Creature Comforts (sample video). In this series they recorded humans and matched them with their animal personas (I guess), so the claymation animal was saying what the human had spoken.  Gumdrop could very well be a robot voiced by a human.

Finally, a quick note about the Robbie video. This video was a bit of a rough watch. The first thing that surprised me was that the space station was still in orbit after all those years. I would have assumed that eventual orbital decay would pull it into Earth's gravity and cause it to crash. While watching the video I was wondering when it was taking place. I kept thinking "how old would I be in 2032?" and I made the calculation.  Then "how old would I be in 2045?" and I made the calculation, and then Robbie mentions that he (she? it?) has been waiting for 4,000 years. At that point I stopped counting, knowing that I would be long dead when Robbie's batteries died. When the robot mentioned that he lost contact with Earth, the first thing that came to mind was a scene from Planet of the Apes; specifically the end, where the main character says "Oh my God. You finally really did it, you maniacs, you blew it up." I am not sure what that says about me, but I would surely hope that they weren't failing to respond to this robot because things went sideways on the surface of the planet.

From the Readings

Finally, there were some interesting things that I wanted to point out from the various articles that we had for this final week. In the Transhumanist Declaration there was this belief or stance (italics my own):
Policy making ought to be guided by responsible and inclusive moral vision, taking seriously both opportunities and risks, respecting autonomy and individual rights, and showing solidarity with and concern for the interests and dignity of all people around the globe. We must also consider our moral responsibilities towards generations that will exist in the future.
The thing that stood out to me was the invocation of morality.  I haven't really thought about the nature of morality in quite some time - or rather, I haven't had to debate it. That said, I am curious as to whether morality and moral behavior are an expected standard amongst human beings, or whether they fall under the category of "common sense" - which, as we know, isn't all that common, but rather is made up of the cultural and lived experiences of the person who holds these things as common sense.  Is morality something that is malleable? Or is it a constant?  If it's malleable, what does that say about the expectation to act morally? If you harm or injure someone or something while trying to act morally, does that negate or minimize the fact that you have actually harmed them or stepped all over their rights?

The final article, for me anyway, was Humanism & Post-humanism; here is something that got the mental gears working:
In addition to human exceptionalism, humanism entails other values and assumptions. First, humanism privileges authenticity above all else. There is a "true inner you" that ought to be allowed free expression. Second, humanism privileges ideals of wholeness and unity: the "self" is undivided, consistent with itself, an organic whole that ought not be fractured. Related to the first two is the third value, that of immediacy: mediation in any form--in representation, in communication, etc.--makes authenticity and wholeness more difficult to maintain. Copies are bad, originals are good. This leads some humanisms to commit what philosophers call the "naturalistic fallacy"--i.e., to mistakenly assume that what is "natural" (whole, authentic, unmediated, original) is preferable to what is "artificial" (partial, mediated, derivative, etc.).
What really got me about this is that in humanism there seems to be no space for the fact that while we do process things differently, we aren't really 100% unique as individuals.  The old adage of standing on the shoulders of giants goes beyond academic writing.  We are composed of the sum (or sometimes more than the sum) of our experiences, which encompass human relations, education, personal experiences, environmental factors, and many, many more things.  We can be clever, ingenious, and visionary, but we weren't born with all of what we need; we acquired it along the way and it shaped us into who we are.  We can be authentic, but we can't be authentic without other people around. Others both shape us and allow us to show our individuality and elements of authenticity. Thus, while we may not copy verbatim, we do copy in some way, shape, or form, while remixing that into something that makes it "new" and not a copy of something.

Furthermore, this whole notion of wholeness is where I saw a connection to Carr's Is Google Making Us Stupid? article. It seems that one of the laments (I won't go into everything in this article) is that people seem to skim these days, that they don't engage deeply, because the medium of the web has trained us (or derailed us, as the reading might imply) with way too many flashy things on the screen vying for our attention.  I completely disagree.  Even when people had just plain old, non-hypertext books, things kept vying for our attention. If we are not interested in what we are reading, it is more than easy to pick up that comic book, listen and sing along to that song on the radio (or the MP3 player), or call your friends and see if they want to hang out.  Even when you're engrossed in reading traditional, non-hypertext materials, if there are footnotes or endnotes that give you a lot of supplemental information, they take you out of the flow of your reading.  Deep reading isn't a technology-related issue, but rather a more complicated (in my opinion) endeavor which has to do with reader motivation, the text's relevance to the reader, text formatting and typesetting (ease of reading), and the mechanics and grammar of the text; i.e., the more clunky or "rocky" the text, the more inclined the reader will be to skim or just avoid the text.  There are more critiques that I have of the Atlantic article by Carr, but I'll limit it to this one.  Now, back to Humanism & Post-humanism. Another interesting quote (italics my own) is as follows:
most of the common critiques of technology are basically humanist ones. For example, concerns about alienation are humanist ones; posthumanism doesn't find alienation problematic, so critical posthumanisms aren't worried by, for example, the shift from IRL (in-real-life) communication to computer-assisted communication...at least they're not bothered by its potential alienation. Critical posthumanisms don't uniformly or immediately endorse all technology as such; rather, it might critique technology on different bases--e.g., a critical posthumanist might argue that technological advancement is problematic if it is possible only through exploitative labor practices, environmental damage, etc.
This is a pretty interesting thought, that most common critiques of technology are humanist ones.  It reminds me a lot of my Change Management course when I was an MBA student and the children's book Who Moved My Cheese? Well, I saw it as a children's book, but it may not be. It's probably a tale that can be dissected and critiqued quite a lot from a variety of stances. The thing that stood out for me is the worry that technology has the potential to alienate by not having people communicate with one another in established ways - but what about people who are already not communicating well in established ways, and who can use ICT to assist with communication?  The usual example of this is students in classes who are generally more timid or laid back.  In a face-to-face classroom, which has space and time limits imposed by its very nature, the students who are more outgoing and outspoken might monopolize the course time. This won't give learners who are not as outspoken an opportunity to chime in, or share their point of view or understanding once they have processed the readings for the course - things that could move the conversation and learning forward in interesting and unforeseen ways.

In an online course or a blended course, however, learners have more affordances that are not there in a strictly face-to-face course.  They have time to chime in, and thus the conversation can go on longer and more things can be teased out of a discussion topic.  Furthermore, students who aren't as outgoing in the face-to-face classroom have an opportunity to take the microphone (so to speak) and share with others what their thoughts are on the subject matter being discussed.  Instead of everyone vying for that limited air time that you have in a face-to-face classroom, ICT has the potential to democratize the discussion that happens in the classroom by providing opportunities for all to contribute.  Technology by itself won't be the panacea that makes this happen, let's not kid ourselves; there are many underlying foundations that need to be in place for students to use the affordances of ICT effectively.  That said, this is a case where ICT has the potential to bring together, not alienate, fellow interlocutors and travelers on the path of learning.

So, what are your thoughts? :) How does Human 2.0 sound?

Saturday, November 23, 2013

#edcmooc - almost human

Man, it's been a crazy week.  I've been jotting down notes for this post from the various viewings, readings, fellow blogger posts, and discussion forums.  This was meant to be several posts over the week, but it all wrapped up into one big thing. Oh well.  Such is life ;-) This week I'm creating some category headers to make things easier to read.

From the week 2 synchronous session

The synchronous Google+ session was pretty interesting.  From it came a few interesting points to ponder.  One of the participants of EDCMOOC brought up the question of whether it is necessary for everything to be a game.  The question was probably geared toward questioning gamification, and the implication that learning shouldn't need to entice learners to partake in and engage with learning.  I personally disagree.  Everyone finds a reason to participate, or not participate, in a learning venture.  For some people learning is a thrill ride, so even when they are down in the dumps and struggling with material they are having fun.  For others, when they are struggling it may feel like they are being publicly flogged - not a nice feeling to have.  By incorporating game mechanics into learning you aren't just trying to make something more enjoyable; rather, I would argue, you are trying to provide additional appropriate supports, like learning scaffolds, and appropriate rewards for meeting certain crucial checkpoints. Focusing only on the badge or the fun aspect really does gamification a disservice. A related comment had to do with the notion that if we view education as a game, then we will find ways to try to "beat" it or maybe "cheat" our way through.  Personally, I think students currently do that, and we're not treating education in a game-like manner.  Students always haggle for one more point on that exam, or ways of getting extra credit, or they figure out the professor's preferences and just regurgitate what they think the professor wants to hear. In these cases there may not be any actual learning happening, but rather a way to "cheat" the system. We, as humans, are problem-solving animals.  Gamification or not, we'll try to beat the system.

A follow-up comment questioned the necessity of viewing everything through a competitive lens. The implication is that learners work in a solitary manner, in a zero-sum environment, so my win means your loss.  So, as the commenter asked, why not work together? Education doesn't have to be zero-sum.  We can, and in fact do, work together. But it all comes down to each person's individual goals and motivations for being part of an educational venture. How you as a learner traverse the path of the course from start to end will depend on a lot of things, including your own motivations.

Finally, there was an interesting discussion on essays which came about from the readings on automated or peer grading of essays. There were two distinct points that came out:

  • Why write it if no one else will read it and interact with it? (Jen?)
  • Isn't one of the points of essays to have that conversation with yourself? Taking a conversation and internalizing your conversational partner (Hamish) 

For what it's worth, I think that writing, as a process, has both internal and external motivators.  While I think many of us would like to engage with others through our writing, it's not the only motivator for our writing.  There are many blogs out there with few readers, this blog included. I don't necessarily write to have people read my posts and thus engage intellectually with me; rather, it's a way of discussing various views with yourself, and by doing it openly you have the opportunity to both involve others in this process and share your understanding with others. In one case (discussion with one's self) you are pushing yourself to become more knowledgeable, while in the other case (discussion/engagement with others) you are potentially engaging in a Vygotskian dialogue where you are the More Knowledgeable Other in some cases and your peers are in others, thus through common dialogue expanding the overall knowledge and (hopefully) understanding in the network. When it comes to automated essay scoring, I already mentioned in a previous post that the emphasis is placed on the grading, and not the process.  I would also add, based on this hangout recording, that I think automated essay scoring is potentially detrimental to the discussion with one's own self, because you are not writing to engage with yourself or others but rather writing to beat the automated machine algorithm that scores your paper.

From the Week 3 viewings

There were a few interesting videos this week to poke around the old "meat brain" and make it do some work. They're Made Out of Meat is pretty hilarious, and the World Builder video was quite touching. It reminded me of the Animus in Assassin's Creed and moving around within the Animus, both the reconstructed environments and the "getting your bearings" environments (gameplay introduction) that look more blocky. Specifically, World Builder made me think a bit about what it means to be disabled: if our bodies are incapacitated but the spirit (the ghost in the shell, if you will) is there and ready to participate, what does that mean about the human ways in which we can interact?  If there were a separate, but connected, matrix-like reality that connects the minds of people in coma conditions, what would that do to our definition of human communication?

This brings us to the difficulty of defining what being human is, and to Fuller's TEDx talk on defining humanity.  Thinking about this TEDx video reminded me of some really interesting characters that I've come across over the years in science fiction shows that are either androids, cyborgs, or some other type of artificial intelligence.

Lt. Commander Data, Star Trek
One of the first characters that comes to mind is that of Lt. Commander Data on Star Trek: The Next Generation. Throughout the series, and the subsequent movies, the audience (and the cast of the show) explore what it means to be an individual and to be human.  Data refers to his maker (Dr. Soong) as his father, and Dr. Soong's significant other as his mother.  Data keeps up his pursuit of becoming more human by experimenting with art, having a pet (Spot the cat), having romantic relations with a crew member, and, in the series, also creating an android of his own as a way of procreating. When Starfleet engineering wanted to take him apart to learn what makes him tick (and risk damaging him), the whole issue of what it means to be an individual came up again.  Finally, at the end of the last TNG movie, Data sacrifices himself to save his friends and the crew.  As a fan this annoyed me, but in the comic leading up to the 2009 Star Trek film we see Data back from the dead, through the transplanting of a copy of his memories and experiences into B-4, a more primitive android of the same make. Eventually the primitive positronic brain of B-4 adapts to accommodate all of Data's personality.  So, what does that mean in terms of who is human? How does it connect to World Builder?

Finally, with Data, what I found interesting was that a Vulcan (arguably another human) told Data in one of the episodes that Vulcans strive all their lives to reach what Data has had since birth, but on the other hand Data would gladly give it up to be more human.


Rommie, Andromeda
Next we have another Roddenberry creation: Andromeda. In Andromeda the spaceships have avatars that help the ship's AI communicate with the crew and captain.  The relationship between ships' avatars and their captains seems to take on more intimate dimensions from time to time, since it appears that these artificial intelligences aren't cold, calculating war machines, but rather symbiotic beings that are capable of fondness, caring, or even love.  As Data, from Star Trek, would say: "As I experience certain sensory input patterns, my mental pathways become accustomed to them. The inputs eventually are anticipated and even missed when absent."  The interesting thing about Andromeda (or "Rommie") is that she exists in at least three places simultaneously - an android body, a holographic projection, and on a computer screen.  There have been many times where all three personifications of Andromeda have conversations to work out problems and figure out courses of action.  This reminds me a lot of what Hamish mentioned in the Hangout about writing in order to have a conversation with one's self, and to be able to work out issues.  I am currently re-watching Andromeda, part way through season two, since I missed a lot of it when it was originally on television.  Who knows what else comes up in terms of being human as the series progresses.

Cameron, Terminator

The next character that comes up is Cameron from Terminator: The Sarah Connor Chronicles. In the movies the terminators are portrayed as cold, heartless machines that do what they are programmed to do.  This usually involves lots of killing of humans. However, when John Connor (from the future) captures and re-programs one of them (Arnold Schwarzenegger), that cyborg is then turned to protecting his younger self in the past.  The television series actually took a different approach from the movies.  Not all of the terminators were seen as single-minded assassins sent to the past.  Cameron, the protector cyborg, shows us a glimpse of what might be happening in that metal head of hers.  She doesn't just adapt by using new slang and stances and acting "more normal" to fit in.  It seems to me that the writers of the show tried to show us her interest in the arts when she was practicing ballet on her own in one of the episodes, even after the mission which involved knowledge and skills of ballet was over.  She kept on experimenting with it.  She also showed an emotional component, a connection to John's younger self.  This may have been a preservation mechanism; we don't know. The series was killed off after two seasons, but it would be interesting to explore more of what constitutes a human and what sort of human elements these killer cyborgs can show.  Another interesting character is the T-1001 terminator, Catherine Weaver, and her "son" John Henry.  Too much to go into at this moment though. This was a good series ;-)
Dorian, Almost Human

Finally, I was thinking of the Cylons in Battlestar Galactica, both "chromejobs" (robots) and "skinjobs" (humanoids, made of flesh and blood), and the duality of what it means to be non-human.  Both chromejobs and skinjobs were Cylons, the "enemy" of the "ragtag crew" of Adama. The chromejobs came before the skinjobs, but they were seen as equal... or were they?  I don't remember a lot from the series since it's been a while since I've seen it.  So, I will focus instead on my last case: Dorian, the android from the new series Almost Human. At this point there have only been two episodes, so it's not that easy to discuss these characters in depth, but the portrayal of androids in this series, and Dorian in particular, is pretty interesting.

Dorian was decommissioned and replaced with more sterile, tactical police androids because the DRN (Dorian) model's emotion engine made them "unpredictable." I guess by "unpredictable" the script writers meant to imply that they were quite human and acted "illogically," as a Vulcan would put it. It seems to me that in high-risk situations where police need a police android partner, that unpredictability would be a benefit, not a hindrance. From the two episodes we've seen Dorian in, it would appear that there are feelings there toward the humans he cares for, but also for fellow androids.  When an android was put down (through no fault of her own, it should be added), Dorian stayed in the lab to ease her passing, in a way. Is this a human trait?  Do humans actually do this? I would argue that this, empathy, isn't a universal human trait.  I'm quite curious to see what the writers have for us as the season continues.
Finally, back to the TEDx talk: it was interesting to think about the "elevation" or raising of all humans to a certain level coming in with Christianity - a concern about the poor that wasn't there before. Since I don't have much background in this arena I'll go with it, but it makes me think about the current rhetoric about "democratizing" education with MOOCs by serving underrepresented student populations or the less advantaged in developing nations.

From the Week 3 Readings

There were some interesting articles this week, although I must admit that I didn't find them as interesting as those of the past couple of weeks.  There were, however, some interesting points made in "The Human Element" on InsideHigherEd.com. One of them keeps coming up over and over again in one of the courses I teach. The point is as follows (quote from IHE):
But Hersh believes there is another major factor driving the gap between retention rates in face-to-face programs and those in the rapidly growing world of distance education: the lack of a human touch.
One of the misconceptions that students have coming into their first online course is the expectation that it will be a straight replication of the processes and procedures of a face-to-face course.  This, of course, is not possible, and holding such an expectation leads to an inevitable sense of disappointment.  Recording a play and posting it on YouTube will not give you the same feeling and engagement you have when you go to the theater; the audience engages differently in each medium. Thus, if you present and attempt to engage in a new medium using another medium's rules and expectations for action and reaction, you will not be very successful at reaching your end goals. Distance education doesn't lack a human touch; it just has a different human touch than people are expecting.

The other issue with this article is that I don't understand what his "Human Presence Learning Environment," based on the Moodle LMS, has that is so different from other learning management systems. Just incorporating video doesn't seem like such a great leap forward, and these days you can do this with many external providers, including Google+. It seems to me that they just wanted something new, in terms of a name or an acronym, to get their 15 minutes of fame.  A fellow academic, a couple of decades older than I, told me recently that when he was first starting out he thought new acronyms that viewed something existing from a slightly different angle were silly, but he found out that this was the way to get funding for research.  The review, validation, and reframing of the existing just isn't sexy enough to get you attention.  Too bad for our profession.

Finally, I think that in MOOCs the "human connection," whatever that might be, can help raise MOOC "completion" rates, however you define them (I personally am still a skeptic on this front). But I am wondering how one can increase communication when you are in the virtual equivalent of a stadium with many unknown peers, and facilitators moving through the crowd in their bright yellow shirts, handing out participation stickers and passing the microphone to someone with a bright idea.  There is an idea that has been brewing in this arena since last spring.  More on this as I hash it out.

Next up was the article "Human Touch" on EducationNext.org. This article reminded me of a classmate I once had with two kids whom he never allowed to use a computer, watch TV, or play video games. I think that this quote from the article perfectly summarizes his position:
A computer can inundate a child with mountains of information. However, all of this learning takes place the same way: through abstract symbols, decontextualized and cast on a two-dimensional screen. Contrast that with the way children come to know a tree–by peeling its bark, climbing its branches, sitting under its shade, jumping into its piled-up leaves. Just as important, these firsthand experiences are enveloped by feelings and associations–muscles being used, sun warming the skin, blossoms scenting the air. The computer cannot even approximate any of this.
Having grown up in what is really a village, with lots of land around me, and dirt, and some farm animals, I think that this is an important part of growing up: the great outdoors, the fresh smells, the dirt (and subsequent washing up). However, I wouldn't exchange computing for this, nor this for computing.  I think that these days there are ways of thinking that need to be nurtured, rather than just treating the computer like a tool to be learned in your final year.  There are measures of creativity that can be accomplished with games and computing that cannot be accomplished in real life, and there is also a lot of real life that cannot be accomplished virtually. This is something I saw as a computer trainer in a previous job.  Many students had learned the tool mechanically, so when an update came and things moved around, they had difficulty being critical and finding the right information in that giant mountain of information.  The skill they had picked up was following a pre-determined critical path, not finding their own critical path through a mountain of information.  This, to me, is much more important than having someone learn how to use a computer as a tool in their final year.
Of course, computers can simulate experience. However, one of the byproducts of these simulations is the replacement of values inherent in real experience with a different set of abstract values that are compatible with the technological ideology. For example, “Oregon Trail,” a computer game that helps children simulate the exploration of the American frontier, teaches students that the pioneers’ success in crossing the Great Plains depended most decisively on managing their resources. This is the message implicit in the game’s structure, which asks students, in order to survive, to make a series of rational, calculated decisions based on precise measurements of their resources. In other words, good pioneers were good accountants.
I remember playing Oregon Trail when I was in high school on an Apple IIGS.  Of course, by that time it was old, but I didn't care because I had not grown up with this technology.  I approached the game as a game, not as a way to learn history.  I think that games will always fall short of the goals we want them to reach.  There is just no way, with today's technology, to reach the levels of sophistication portrayed in Star Trek's holodeck.  I think games are a good start for getting a hook into student learning. We can then take that interest and expand upon it with additional information that would benefit learners in the long run.  You could even tie in the great outdoors! Give the learners the materials that frontier settlers had, and tell them that they need to solve a problem with these seemingly unconnected materials.  Make them junior MacGyvers and help them learn a lot of different skills, not just names, dates, and resource management.

Finally, here is a funny xkcd comic that a fellow participant shared in week 2.

Simple Answers to Technology (xkcd.com)

What do you all think of these things?

Wednesday, November 20, 2013

SPOCs are illogical

Angry Spock (Star Trek reboot)
OK, OK... the title was easy pickings, but this article is quite serious.  I've chosen to ignore, for the most part, the whole idiocy of the term SPOC (small private online course).  SPOCs are really just "regular" online courses, as I've written in my one other post about SPOCs. It bothers me that there is so much revisionist history around the topic of "traditional" online education, with articles such as these in which organizations like Colorado State University claim to be "pioneers" in SPOCs because they've been doing online education for the past five years.  A whole five years? Our fully online Applied Linguistics MA has been around for eight years, and our umbrella organization, UMassOnline, has been around for about ten years doing "SPOCs." Maybe we are pioneers too, who knows, but it's really difficult to critically discuss MOOCs, traditional online education, and flipped classrooms when people muddy the waters with SPOCs, another useless acronym that overlaps with existing terms.

So, I was pretty content to just ignore SPOCs, but then a blog post came across my twitter feed (I think courtesy of EDCMOOC) that I couldn't ignore from a philosophical perspective. It was this article, along with hearing the term from a colleague of mine (which made me almost gag), that was the real impetus for this post. In this article SPOC has been succinctly defined as:

The term “SPOC” has been used to describe both experimental MOOC courses where enrollment was limited as well as packaging options whereby academic institutions can license MOOC content for professors to use as components in their traditional courses.
This is a good place to point out that an "experimental MOOC" is redundant.  ALL MOOCs at this point are experimental.  We haven't cracked this nut, so we're experimenting with large-scale online courses and various evaluation mechanisms in an environment where we're not as worried about accreditation and academic honesty.  Sure, we pay lip service to academic honesty by clicking the little "I am in compliance with the honor code" button, but at the end of the day no one is risking their reputation as far as academic honesty, retention, and measurable outcomes go.

Beyond the whole experimental thing, I should point out, and will elaborate a little later in this post, that MOOCs and licensing are antithetical to one another.  Part of MOOC is Open.  We can argue all day about what "Open" really means, but at the end of the day the Open in MOOC was intended to be Free to Use, Free to Remix, Free to Repurpose, Free to Feed Forward. But for now, let's focus on the limited enrollment:
One of the most successful limited-enrollment MOOC/SPOC classes was CopyrightX from Harvard that only allowed 500 who made it through an application process to enroll. The course was still free, but students who took part were expected to be full participants (not auditors or dabblers), and the combination of limited enrollment and a decent-sized teaching staff meant that students could be given research and writing assignments that would be graded by teachers vs. peers.
Last summer I was having a chat with a respected friend and colleague, over beers, after the end of Campus Technology 2013. My colleague works for an entity that deals in MOOCs, and the organization does cap courses for one reason or another.  When I discovered this, I shot off the first volley and proclaimed that those courses weren't MOOCs if they prevented more than a certain number of people from signing up. An interesting discussion ensued whereby I was able to work out, better articulate, and understand my own positions on MOOCs and course caps.  At the end of a very interesting discussion, this is what I came up with: it's perfectly fine to have an enrollment cap in a MOOC for one of two reasons. (1) You are unsure of the various technology pieces, and thus you want to hold some variables constant while you stress test your system.  After all, you don't want a repeat of the Georgia Tech MOOC #Fail. Or, (2) the other acceptable reason, for me anyway, is to experiment with some sort of new pedagogy, design, or technique while making sure that you aren't juggling too many things; having fewer students is preferable for research purposes.

That said, even with lower course caps, this doesn't make it any less of a MOOC.  After all, as I have argued in previous posts, Massive is relative. Some courses will garner 100,000 students because the barriers to entry are low, and others will only get 100 because the barrier to entry, such as discipline-specific prior knowledge, is pretty high.  Furthermore, the CopyrightX course isn't really a MOOC, in my book.  Not because of the course cap, but because of the way they approached the course.  They expected each and every student to participate based on their own rules, and they treated the course like a web version of a large, auditorium-delivered course, complete with the assistants they had to help out in the course. This wasn't a MOOC. Perhaps it was more along the lines of a traditional online course, but calling it anything other than a free traditional online course is disingenuous and shows no understanding of past precedents.  Next we have the whole sticky issue of licensing.
 The licensing of edX content to San Francisco State College that caused such a ruckus earlier this year represents the other phenomenon being commonly referred to under the name SPOC.  In that case, the same material you or I would see if we enrolled in a MOOC class (such as the lecture videos, quizzes and assignments associated with Michael Sandel’s HarvardX Justice course) would be given to professors who would be free to pick it apart and put it back together in order to customize their own classes in a way that represented their preferred combination of their own teaching resources and third-party materials.
I have two problems with the notion of licensing MOOC content, both philosophical. First, as I said above, we've established that MOOCs have an Open component: use, reuse, remix, and redistribute.  This also happens to be among the tenets of Open Educational Resources.  Sure, with OER you are technically providing materials under an open license, but the language used in the discussion over licensing MOOC content is much more commercial in nature.  It's seen as a way of making money for venture-capital-funded MOOC platforms like Coursera and Udacity.  In addition to the philosophical issue of what constitutes open, I have an issue with the crazy amounts of money pumped into VC-funded ventures, which might well end up raising tuition and fees for students who are paying to get their accredited degrees. So, in addition to signing contracts with these companies, giving them the right to redistribute the content, and handing over considerable chunks of change to design or run these courses, we have content locked up in a closed system. This is a far cry from the Open we envisioned before edX, Coursera, and Udacity came onto the scene.

This reminds me of parallels in the academic journal publishing industry.  Authors do the work for free.  For the most part, editors also do the work for free.  Journals, however, cost money, and they cost our libraries a pretty penny for access to the very journals that those same authors (and their students) contribute to.  If you are designing MOOC content with the intent of making a profit by reselling it to classroom flippers, then you're not making a MOOC. You're just developing content, like people have done in the past. If MOOC content is available freely for use in other courses, small, large, campus, online, flipped, blended, or whatever, you don't need to call it by a new name.  Just use the OER like we've used it before.

Your thoughts on the matter?

Saturday, November 16, 2013

#edcmooc - Where do you want to go today? Build that bridge to your utopia

So, we are at the end of Week 2 of #edcmooc, and we are wrapping up the unit on utopias and dystopias, and everything in between (because nothing is really that black and white). As with the week before, there were some videos to watch and think about. I think that the no-lecture-videos format works well.  I like to see what people do with certain conversation starters and where they go with them. As I said last week, even though this course is run through Coursera, it's very much a cMOOC format to me.

One of the videos presented was the one below on bridging the future.  Honestly, this video seemed really cool, and a nice proof of concept of what could potentially be done with technology. Students, in this case, seem to be using junior versions of tools, like CAD, that professionals use to do their work. This seems useful both for learning concepts and for beginning to learn the tools that are used in real life for these types of tasks.  The one concerning thing I noticed was the lack of books.  Don't get me wrong, I do my fair share of reading on eBooks, but those tend to be non-academic.  If I need to have several books open at the same time, an eBook just doesn't cut it.  I don't have the money for five iPads to do what people did in Star Trek with PADDs. I am also wondering what the cost of these things is.  I know that overall costs tend to go down over time, but I am also considering the cost of not equipping classrooms everywhere with this, thus expanding the gap between the haves and have-nots. While this future is cool, it's no utopia, and it's no dystopia.  As I said before, one man's utopia is another's dystopia. What's important is: what can we do with this setup that our current setup does not allow?

 


As a side note to the video discussion, the video "A Digital Tomorrow" (see below) was pretty funny.  It may seem dystopic at first, but I think it's probably indicative of what the future may look like.  There will be some pretty interesting technology, but it won't work as well as the advertisements say it does, or as people imagine the future to be: flawless, where everything works.  The visuals also reminded me of the jPod TV series.

 

On the article front, the articles were pretty interesting reads, but I'll only focus on two: "Metaphors of the Internet" and the article on peer reviews vs. automated scoring.

The metaphors article brought me back to my days as a linguistics student (a few years back) with its mention of emic and etic perspectives.  It also reminded me of schema activation from that same applied linguistics work.  It was pretty funny to me how Rheingold is painted as an internet critic, a critic of "other forms of electronic communication [who] often cite[s] commodification as a problematic, destructive force on the Internet," especially since the piece was written in 2009, and by then Rheingold seemed to have become an internet "convert," advocating the harnessing of the internet and its social element to amplify our collective intelligence.  Is this just an honest oversight by the author? Or a case of selective bibliography or interpretation?

Metaphors are pretty good at getting people started with understanding a new thing.  They activate schemas in our existing knowledge that help connect what we are learning to what we already know. They are, however, only a beginning. Our understanding of the new should go beyond connecting with the old; we should understand the nuanced differences between the old and the new.  This article reminded me that when the internet was young, and I was starting to learn about it, I didn't have any metaphors for it.  Computing was also new to me, my English was still improving since Greek is theoretically my native language, and existing metaphors like "highway" really meant nothing, because my notion of a "highway" (Εθνική Οδός) was essentially no different from a long stretch of 2.5 lanes.  I guess my notion of the internet was a place to find things. Maybe the best metaphor I could come up with is the notion of the bazaar.

In the other article, one of the things that really came to mind was that way too much emphasis was placed on the grading aspect of essays (the raw score) and not enough on the commentary aspect. When someone grades a paper, or any assignment that is something other than formulas, there are two aspects: the raw score from a rubric and the comments on the essay. Even if someone gets a perfect score on their essay, that doesn't mean they've reached the apex of their performance.  They can still do better and improve, and this is where instructor comments come in.  You can get 100% on an assignment and at the same time improve your work. You do this by reading the comments from your instructor (or more knowledgeable other) and applying them to your day-to-day work.  Mechanized grading, or peer grading, can give you the same raw scores for some very basic essays, but the commentary for improvement won't be there, not to mention that when essays stray from a prescribed format they will be marked wrong even when they are not.

Finally, there was also a lot of great activity in the forums. I went in and up-voted a few things that stood out to me, but it would take more than one, two, or three blog posts to discuss all the interesting sparks of imagination in the forums.  For the time being I picked one thread that ties into my other MOOC thoughts: "Would you pay for a MOOC?" The question was:
Would you pay for a degree taught in MOOCs? More importantly, and a topic in and of itself, would businesses and industries hire people who have learned in this type of environment?
Jen Ross asked in this discussion:
Great post, Alan - maybe the question isn't so much 'would you' pay, but 'how much' is a MOOC worth? What is it that we pay for when we pay for education? 
I honestly think that the way things are today what we pay for is Accreditation.  Of course that presupposes a valid pedagogical model and faculty contact time, however one may measure that. This, in the US, seems to mean measurement of "butts in seats" time in many instances. So having a subject expert teach for a certain amount of time, and then passing some sort of summative examination ties together to give us accreditation. This may seem like a really bleak view of education, but with many people going to school for employment purposes, that seems to me to be the main impetus for payment of educational services.

I personally wouldn't pay anything for a MOOC. A MOOC is open, and thus, for me, free. The certificate of completion that Coursera, Udacity, and edX hand out at the end does not mean much in the real world at the moment. However, it is a nice memento of my time in the MOOC! Like others said in the discussion thread, I would probably donate the cost of a cup (or two) of Starbucks coffee toward the MOOC if it helped support the infrastructure that makes the collaboration possible. But paying as a prerequisite to participate? No.  Hamish Macleod pointed out that he contributes to Wikipedia every now and again because it is a valuable tool for his job. I think that this is an apt analogy for MOOCs.  Furthermore, I do think of the open in MOOC as free.  Content usually isn't open as in OER open, so open must mean free.  Otherwise, what could open mean?  Open to enrollment?  So are college courses, and they have been for quite some time. So what? :)

I also liked this quote from Roberta Dunham:
MOOCs are great ways to share learning without having to deal with the organized higher education syndicate.
I think Roberta hits on an important point, and one of the intents of the original cMOOCs. I think we've come full circle, and if we haven't yet, we may be pretty close.  Keep thinking freely :)

One last thought (more of a don't-let-me-forget type of thing): the issue of accessibility came up this week in #edcmooc, accessibility of two types.  On the forums, accessibility was discussed from a health standpoint: people with disabilities and their access to MOOCs. On twitter, the issue was the digital divide (and I would add to that computer, information, and critical literacies) and access to MOOCs.  This is a major topic in my mind, but the subject of a future post :)

Friday, November 15, 2013

Video Games and Learning MOOC - process thoughts

Over the past few weeks I've been dabbling in a course on Coursera designed by two professors from the University of Wisconsin.  I didn't realize who they were (Squire and Steinkuehler) initially, but as the "course" progressed I realized that I had read some of their work before when I was reading about video games and learning.  An added benefit was that there were some guest appearances by Jim Gee, someone who is mentioned quite often in the department I work in (Applied Linguistics at UMass Boston) and whose books on video game learning I've enjoyed in the past.

Since there really wasn't a lot for me to react to while the MOOC was in session I decided to hold off and do one summative post at the end of the MOOC, which just so happens to be this week.

So, the first thing that struck me was the insistence that the "M" in MOOC stands for "Massively."  This is just wrong. It's a Massive Open Online Course, not a Massively Open Online Course. I know it's a small, picky thing, but they do mean different things. Massive and Massively are two different words.  Massive means many people enroll, and it's Open (whatever value you may ascribe to Open).  Massively Open means something else, like "hey, look at that gaping sinkhole! It's massively open wide!" I guess a course could still be massively open, as in there is no copyright, everything is downloadable, remixable, and redistributable, and it's all free. No question about it. But this is not the value ascribed to the "mass" part, so let's just get it right: Massive, not Massively.

Now that I've ranted a bit, what was my motivation to join? I was interested in the topic, and I saw Jim Gee's name, someone I've heard about for a few years now through my own department's work, as I previously mentioned. I also found it motivating not to have to deal with "certificate of completion" tasks like silly little comprehension quizzes, and instead to work toward contributing to a research project. From the weekly introductions to the course, I gather that other participants felt they were guinea pigs for the instructors' research and wanted other, more "substantive" evaluations of their work.  Personally, I think that contributing to research was much more interesting, and completing these little assignments allowed participants who were new to this to see what is entailed in figuring out what people learn from video games, and how the conditions and results of learning are tested in these environments.

One of the data-collecting assignments was the Week 1 discussions.  I think they were really good in that they allowed learners to post something other than text. The assignment was somewhat open-ended (with the exception of a time limit); it allowed people to experiment and post something that discussed the issues of week one without seeming stilted. Even though something other than text was posted, it still leads me to my previous conclusion that discussion boards are not that great in MOOC environments.  There were multiple threads per game, and at the end of the week it really was a bit unwieldy to go through more than 60 pages of discussions.  Perhaps grouping discussions by game might work better; or, if a topic (a game, in this case) has already been posted, don't let people post the same topic again, and instead encourage them to participate in the existing threads about it.  This way you don't have zombie threads going around with one or no replies. A discussion forum is useless if no one discusses.  It's nice for data collection, but not for discussion.

Another nice thing was that the instructors got their hands dirty, as much as they could anyway. Even if they can't be right there in the forums, they do acknowledge the contributions of participants, especially those who help the community, in the introductory videos for each week.  It's also interesting to see that they got rid of the "down vote" (or so they said) in threads because people didn't like it.  I'd be interested in seeing research on this, more specifically on down-voting, up-voting, and their effect on participation.

It was also nice to see the course creators encourage people to take their materials and use them in their own classes. Outside of cMOOCs, I don't know if I've come across course creators saying that their material is OER and encouraging others to freely use it.  I know that I like the content, and some of their lectures are pertinent to courses that I'd like to propose for my own department, so it's nice to see the option there.  I didn't see an easy way to download the material for my own use, though, so I guess I'll keep looking.

One of the anecdotes shared about kids playing Civilization, I think in one of the lectures by Squire, reminded me of my high school experience with Bolo on networked Apple LC IIs over AppleTalk. This was done after school, and we regularly had at least four teams of five playing for a few hours, along with the head of computer science at the time. There was cooperation among teams, learning the lay of the land, and learning strategy.  For me it was also an opportunity to improve my English, since I had just returned from Greece and needed more practice with the language in a variety of areas.

Finally, what amazed me (and I guess I shouldn't be surprised) is that people just didn't follow directions in the assignments. For instance, there was an assignment to do a text analysis and, at the end, mention where you got the source text (what you analyzed), fill out the following info, and write a little about why you chose the text and what surprised you:
Text resource
The game the text is related to
K1 words: ____%
K2 words: ____%
AWL (academic) words: ___ %
Fry reading level: ___ grade
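For the curious, here is a rough idea of what the K1/K2 percentage part of that template measures. This is only a minimal sketch in Python; the tiny word sets are illustrative stand-ins I made up for the real K1/K2 frequency bands, and the actual vocabulary-profiling tools used for this kind of analysis are far more thorough.

```python
# Minimal lexical frequency profile sketch. K1_WORDS and K2_WORDS are
# tiny, made-up stand-ins for the real first and second 1000-word
# frequency bands; a real profiler would load the full lists.
K1_WORDS = {"the", "a", "of", "to", "and", "in", "is", "you", "that", "it"}
K2_WORDS = {"trail", "wagon", "river", "supplies"}

def lexical_profile(text):
    # Crude tokenization: split on whitespace, strip punctuation, lowercase.
    tokens = [w.strip(".,!?;:\"'").lower() for w in text.split()]
    tokens = [t for t in tokens if t]
    total = len(tokens)
    k1 = sum(1 for t in tokens if t in K1_WORDS)
    k2 = sum(1 for t in tokens if t in K2_WORDS)
    return {
        "K1": round(100 * k1 / total, 1),
        "K2": round(100 * k2 / total, 1),
        "Off-list": round(100 * (total - k1 - k2) / total, 1),
    }

print(lexical_profile("The wagon crossed the river to find supplies."))
```

Even this toy version makes the point of the assignment clear: the raw percentages alone say very little, which is why the instructors asked for the source of the text and a reflection on it as well.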
Many people I saw on the forum just posted their percentages and reading levels, without any other information.  It's like there was no thought process at all.  Too bad, because if the data in these fora are used for research, they may not be that useful.  Speaking of research, I really did like that Squire and Steinkuehler invited people to participate in their research as data crunchers, co-authors, editors, and so on.  They haven't figured out how this will work yet, but it's nice to see that such an offer was made.  I think that there is something to this collaborative research, as is evidenced by the work of my colleagues in the MobiMOOC Research Team, so I may be contacting them to see how I can help.

If you participated in this MOOC, what did you think?

Wednesday, November 13, 2013

Some Mid-Week #edcmooc thoughts & reactions

Take one blog, mix with others, add own thought
and see what happens
Over the past couple of days I've been reading what fellow participants have contributed to the blogosphere on #edcmooc.  I've watched the week 2 videos (more on that in a post during the weekend), and I am slowly reading (or re-reading) some of the food-for-thought articles posted for week two.

To keep things manageable, I decided to devote only three days to fellow participants' blog posts, so I can move forward with the other materials as well.  In this blog post I wanted to react to a couple of things I read from fellow participants in the last few days.  First off, we have the Sage on the Stage (SoS). I was reading this short post on why the Sage is likely here to stay.  Interesting post, and it's got a couple of comments; I highly encourage you to read and think about it as well.  The gist of the post, from what I read into it, is that the Sage on the Stage instructor is here to stay because (1) people want to be instructed and (2) there needs to be teacher presence (see the community of inquiry model).

Now, it's true that in educational or instructional settings, be they higher education, professional education, or athletics, people seek more knowledgeable others in order to gain from their experience.  We tend to associate this with the title "professor" or "instructor," and of course, what these individuals do is "instruct." The fallacy, in my mind, is that we seem to associate instruction with a didactic, "sit down, shut up, and listen" approach.  This isn't the only way to instruct someone. Furthermore, teaching presence means different things to different people.  Take my "traditional" online course as an example.  This course usually enrolls 15-20 students per semester. There will be students in this course who need more frequent interaction with me as the instructor, and there will be students who won't need as much.  In a regular (15-20 student) environment it's easier to figure this variable out and to address those student needs. In a MOOC, well... not so much.  Just because something doesn't conform to one way of interpreting teaching presence doesn't mean that there isn't teaching presence. So, sure, the sage isn't going anywhere, but the way we use SoS approaches will vary depending on where we need to apply the technique.

On a related note, I think courses are a good opportunity for students to come out of their comfort zone and expand their horizons a bit.  For example, if you have a student who always strives to get attention, they need to be able to work and learn successfully in environments where they won't get it in the quantities they want.  If students don't like working with peers, they need to stretch to be able to work with peers when necessary.  No one likes group work because it puts us at a disadvantage: we lose the flexibility of the home court advantage, we need to communicate with others, and we have to negotiate the outcomes. It adds overhead, for sure, but it does add value to the learning experience by exposing students to a variety of views. This benefits the learner in the end, if the learner is open to such differing views.

The other post that got my brain going this morning was Agata's post on her reactions to Noble.

4. In general, to me, this article misses the point. It gives examples of students opposing online education, especially when purchased as part of an obligatory fee. In fact, that is the reason why in 1998 online education could not have had such an impact on learning as it does now. People did not have such easy access to different technological channels of information. Since they had to pay for everything, they were against it. Since parents had to pay, they were against it. It is surprising how invalid the arguments of the article are now, in times when access to the internet is ubiquitous (thanks to wifi) and most people have Twitter and Facebook on their phones.
Perhaps the validity of this paper can be kept with reference to developing countries? I wonder how they relate to the accessibility of online education in Europe and the USA.
I actually brought myself back to 1998, when I was an undergraduate studying computer science.  Back then I did have a really fast 56k modem (wow!), but I really didn't get online often from home because it cost a lot and it was slow.  I did most of my browsing and downloading from campus, where we had access to really fast internet.  In those days an LMS really wasn't useful because we didn't have the ubiquity of access that we tend to have today.  That said, even back then, there were technology fees that I paid to my school every semester (or was it every year?) regardless of access to an LMS for my courses.  This technology fee was used to keep the computer labs up to date, and to provide for classroom infrastructure, classroom technology, and so on.  Having to reallocate funds from one spending account to another doesn't seem like such a big deal if the technology fee doesn't increase.

The one thing that we should keep in mind is that developing countries are not what our "developed" countries were 10-20 years ago. One of the things that seems to be big in developing countries is mobile access.  So, if access to mobile broadband is cheap and ubiquitous, that's a really important variable that differs between them now and us then, and it can be used to provide meaningful pedagogical tools enabled by today's technologies. Back in 1998 we only had CSD (circuit-switched data) at 9.6 kbps on some mobile phones. There wasn't much you could do with that; but if developing nations have access to cheap 2G or 3G mobile networks, technology can play a part in meaningful, technology-enriched pedagogy.

Your thoughts?

Monday, November 11, 2013

#edcmooc: One man's dystopia...

Seems like Week 1 of #edcmooc is now done, and I've read (or in some cases reviewed) the readings and videos that they had posted as resources for Week 1. During the Week 1 live session recap and discussion there was an indication that there were 20,000 registrants for the MOOC.  I'd be interested in seeing how many of those 20,000 follow through and "complete" the MOOC, whatever "completion" means to the organizers of the course. For that matter, I'd like to know what "completion" means since, unlike other Coursera courses, there are no silly quizzes at the end of each week.

I understand that some people want some sort of formative assessment, but I tend to think that multiple choice quizzes are not adequate to indicate whether people "get it" or not.  I suspect that in this course there will be an "aha moment" around week 4 when it suddenly clicks for people.  If you are in #edcmooc and are reading this blog post, my recommendation would be to go out and read others' blog posts and discussion forum posts, and then write your own blog (or personal learning journal) to keep track of the thoughts in your mind :) Oh, and don't forget to comment on others' blogs and add your blog to the Edcmooc news feed.

One of the things that came to mind this week was that one person's dystopia is another person's utopia, or at least daily life which makes them neither happy nor unhappy. One of the short videos included in Week 1 was Inbox (see embed) which my colleague and I use when we co-teach a course on Multimedia in Instructional Design. My colleague and I include this video to have students brainstorm about the use of media and what the entire "package" conveys. It serves as a way to begin a discussion around a critical analysis of multimedia.



That said, this video got a lot of responses (as did the other videos for that matter), but there are two things that really stood out for me in the discussions:

  1. the mode of communication in this video is text
  2. there are no auditory utterances throughout this video

This led to some discussion on how we may have moved from aural/oral communication to written communication as the norm, and I sensed that this may have been said with some lament. One of the fellow participants said that her students at school, once the class period was over, just jumped onto their electronic devices and started texting (or tweeting, or whatever), and this was seen as a negative, at least from what I read.  The silence of the corridors was deafening.  I wish I had kept persistent URLs for those discussions.

In any case, it seems to me that seeing communication as an aural/oral affair is pretty limiting.  There are many instances when communication is anything but aural/oral.  For many deaf people the main means of communication, even when two interlocutors are face to face, is sign language.  Signing is also used in sports and combat, even among those who are not deaf. So while a lot of communication is aural/oral, it's not the only mode.  Furthermore, I would argue that communication these days can be predominantly textual. If you add up all of the hours of oral communication we experience, such as chatting with friends and family, water-cooler talk, listening to the radio, or watching television (if we consider the aural element), and compare that to the time we spend reading books, letters from friends (aren't those nice when you get them in snail mail?), newspapers, reports, essays, blogs, webpages, and, yes, text messages and emails, I would say that we've been predominantly text-based for quite some time.

I don't want to pass judgement on this; for me it just is. Each individual and their individual situation will be affected differently based on a variety of elements.  For some, it's a bit of a dystopia because we are potentially losing that "human element." Then again, for others it might be that connection that wouldn't otherwise be there if the technology didn't enable it.  It's hard, for someone outside of the situation at hand, to really gauge whether this is "bad" or "good."  When it comes to technology shaping our lives, at the end of the day, I tend to be on the technology equivalent of the Sapir-Whorf hypothesis: we shape the technology, it then shapes us, and we shape it again in a feedback loop. I guess others, including Winston Churchill and McLuhan, have said similar things.

There were quite a few interesting points in all of the articles for week 1, but I really don't want to rehash everything from those articles.  The one article that really stood out for me was the First Monday article on Diploma Mills from 1998. I wasn't really paying attention to the higher education scene at the time because I was busy graduating high school and entering college.  For me, at the time, college was nothing more than compulsory schooling where you chose your career path and looked for a job at the end of that journey.  It would be an understatement to say that I see schooling differently now, but back then I wasn't concerned with this "technology" thing and the threat of dumbing down higher education.

The thing that surprised me is that, even though this article is now 15 years old, the rhetoric used, and the fears expressed, are the same as those in the Future of Higher Education working papers that go after online education now that MOOCs are the new thing venture capitalists are looking to for making money. Consequently, MOOCs are conflated with online education in general by those who decided not to play in the online education arena these past 15 years.

Now, there are some comments made in the Noble (RIP) article that make sense, but seem to miss out on a number of points, even back then.
Experience to date demonstrates clearly that computer–based teaching, with its limitless demands upon instructor time and vastly expanded overhead requirements — equipment, upgrades, maintenance, and technical and administrative support staff …
Yes, it's true that technology keeps improving, and that equipment needs maintenance, upgrades, and support. However, the same is true of other technologies, including cars. I don't know anyone who would argue that we shouldn't use cars for day-to-day transportation just because our roads stink and need maintenance, lest potholes form.  I know it's a silly example, but a good parallel (in my mind) nonetheless.  That said, technology, and the demands on the faculty member's time, is actually a good thing in my book.  It's important for faculty members to develop succinct policies for communicating with their learners, to keep the work from creeping into personal life.  The old ways of coming into a lecture hall, talking for a few hours, and then holding a few office hours per week for student contact time are deader than a dodo (or at least should be).  Just as we don't learn from stale recorded lectures, we don't learn from an instructor who just isn't present.  ICT has made an improvement in that arena as far as I am concerned (remember, one person's dystopia...).

Another interesting position that has come back to make the rounds through CFHE:
Once faculty put their course material online, moreover, the knowledge and course design skill embodied in that material is taken out of their possession, transferred to the machinery and placed in the hands of the administration.
I've discussed this with fellow faculty members in recent years.  Your syllabus, or the notes you create for your class, are not why students come to your course.  People come to your course to interact with you and to learn under your direction.  They aren't coming for the material.  If the material were all they cared about, they could easily have gone to the public library (or college library), picked up the materials on their own, and self-studied.  Your material means very little without you.  Now, that said, I do think you should be compensated for the time and effort you put into creating an online course.  I know first hand that it is a time-intensive endeavor.  As far as copyright on a specific implementation goes... well, a course can be implemented in many ways, and your way is just one way. At the end of the day, a collaborative approach to curriculum creation is the best way to go (at least according to me :)  )

As a side note for my fellow teachers: slapping a copyright notice on your materials and syllabus is tacky.  People will share the materials regardless, and won't credit you.  Just publish them under a Creative Commons Attribution-NonCommercial license and be done with it :). As academics I think we ought to be putting our material out there anyway and enriching the world. By keeping your material under all-rights-reserved copyright you are making it inaccessible and, as far as I am concerned, going against the main mission of academics: to push knowledge boundaries.
Most important, once the faculty converts its courses to courseware, their services are in the long run no longer required. They become redundant, and when they leave, their work remains behind.
I think that this really strikes at the heart of the argument: fear.  Because people fear that they will lose their jobs, and that they will be negotiating from a weaker position, they come up with all of these arguments against change.  Fear is a strong motivator (or demotivator), but at the end of the day they cannot take what's in your head. They cannot rob you of your knowledge (unless they invent a "forgetful ray gun" and shoot it at you), so you have an ace up your sleeve.

Finally, a quick commentary on my own #edcmooc participation in the forum: I will just go in and upvote or downvote the forum items that I read.  There is too much discussion for my voice to really stand out. I like reading what others write, but at the end of the day the Coursera system isn't really set up to help you find people to follow through the duration of the MOOC, so I am limiting my "active" participation to blogs and twitter, in true cMOOC style ;-)

Wednesday, November 6, 2013

EDCMOOC - Perhaps 3rd time is the charm?

A while back, when #EDCMOOC was getting set up for the first time, a colleague, co-author, and member of the MobiMOOC research team recommended the E-Learning and Digital Cultures MOOC offered by the University of Edinburgh. I think the school was his alma mater and he had good words to say about the organizers. This is always a plus.

Well, first time around I was too busy - I think I was actually too involved with other MOOCs to have the mental bandwidth to participate in #edcmooc. The second time I don't even remember what was happening (was I in summer mode?), so let's scratch that one off.  The third time is upon us! What the heck, I thought to myself, might as well sign up.  The Game Based Learning MOOC  is almost over, and I think I have the bandwidth for #edcmooc now.

Since this is the first week, I went in and had a peek to see how they've set it up. I have to say that this MOOC is, at first glance, doing well on a number of counts; something to really applaud them for.  First, the introductory videos give you a sense of who the organizers and facilitators are, and they are quite up front that they will be getting their hands dirty with the course.  I think that this counts for a lot and really shows that the course will aim to have a teaching presence. This was a nice Community of Inquiry checkmark for me.  There is no need to respond to every single MOOC participant's discussion thread, but having a visible presence and leading by example are quite big factors in MOOC design and implementation in my book.

Another really nice thing is that, up front, they address the potential for being overwhelmed with MOOC materials.  There is an attempt to on-board participants into the MOOC, help them get more comfortable with the idea of MOOCs and the massiveness of the materials created, and prepare learners to be successful in this MOOC.  I'd honestly like to see what the attrition rate for this MOOC is (anyone collecting data or doing participant surveys pre- and post-MOOC?)

As far as materials are concerned, it's great that the videos are not the end of the materials, but rather serve as introductions to the topics.  The organizers actually go so far as to state this in their introductions.  I think this is important because up to this point, in xMOOCs, the videos have been treated as the material to study, with no other texts to accompany or supplement them.  I've peeked at the resources for Week 1, saved a few things to Pocket for reading, and downloaded a few PDFs to read if I have time later on. It's nice to see academic articles selected from databases like JSTOR and peer-reviewed journals as part of the readings for a MOOC.  I know that the rights negotiation issue is a big deal, and these things cost, but interacting with readings and negotiating meaning is quite an important part of learning.

Finally, it seems that there are no silly little multiple choice test assignments - yes!  There is only one assignment at the end, which is optional, and that consists of a digital artefact that you create based on your understanding of the readings and brainstorming around certain topics. I am glad to see that there are no tests for the sake of having a test. In terms of participation, I am not quite sure what I will do.  I am probably going to be on twitter, and continue on this blog (if the 3rd time is the charm).  I've looked into the forums of the course.  Even though I am still not convinced that forums are the best form of communication in these things, I did see quite a few discussions that seemed elevated to me.  In previous xMOOCs I really didn't get a sense that the discussions were worth my time; it just seemed like junk all around.  As a matter of fact, the last MOOC where I felt that discussions were worth my time was MobiMOOC (and that was a cMOOC). I don't mean to sound like a jerk, but in other xMOOCs I've been in, the discussions either didn't grab me (subject-matter-wise), or there was such a bad signal-to-noise ratio that I didn't want to spend my limited time looking for something good to read and respond to. It also deterred me from posting some original thoughts on the forum because the signal would get lost.  Luckily this doesn't seem to be an issue with #edcmooc. I am getting a vibe that I can pick three or four discussions at random and all of them will be worthwhile to read and participate in. I will report back at the end of the MOOC to see if this was indeed the case.

That said, it seems that blogs are acceptable as a way to connect people in the MOOC, but there doesn't seem to be a main mechanism, like gRSShopper, to collect and aggregate these distributed sources, so it will be interesting to see what people come up with.

For an xMOOC, this MOOC seems quite unlike any other xMOOC.  I think people should take note!

So, this is my question to fellow participants: what is drawing you in, and if this is your 2nd or 3rd attempt at #edcmooc, what didn't work for you before?