Tuesday, October 29, 2013

Class Notes

Qualitative Methods 10.29.2013


We need to craft our websites into single documents with navigation and such. It should demonstrate expertise. We should decide what the objective of the portfolio is, and that should guide our work in making the portfolio. It can include our statements on our understanding and problems with the materials of the class. We should include our analyses of articles and notes from texts.

For the upcoming weeks, we should look at chapters 14 and 15 in the Anderson text. They are about ethnography and writing.

What are the outcomes of qualitative research? Our goal is to reveal, construct, and correct. It is entirely possible to go through the steps and never come up with an argument or claim. Sometimes the data never coheres. The argument is the lowest level outcome. Coding reveals practices and patterns. It can reveal the things that are not durable (like looking at the 1943 Guidelines to Hiring Women). We can construct things, such as counter-narratives. We can also correct destructive ideas and narratives or expand and advance positive ones. The arguments central to Communication need to be re-enacted and refurbished.


When we are coding, we need to keep in mind intentionality, competence, modality, instrumentality, and effectiveness. The intentionality should be assessed on multiple levels—what parts are intentional and what are not. There is also always a level of competency. Texts do work, and that kind of work can be described as instrumentality. Finally, every text is effective to a certain level. 

Outside Reading: Park and Choi's The Internet as a Political Campaign Medium

Park, H. S., & Choi, S. M. (2002). Focus Group Interviews: The Internet as a Political Campaign Medium. Public Relations Quarterly, 47(4), 36–44.

In this article, Park and Choi conducted focus groups in order to understand the effect that websites have on perceptions of political candidates. In particular, they were interested in whether interactivity on the website translated into a feeling of having interacted with the candidate.

The study was conducted in 2000 during the Bush/Gore election. The participants for the focus group were undergraduates who were in advertising classes, all age 21 or older. For the study design, they asked participants to look at the websites of Bush and Gore for half an hour in a computer lab before convening the focus group. Once there, participants were asked questions about their experiences with the websites. The questions were formulated with the intent of using Media Richness Theory in the study.

From a methodological point of view, one of the problems in this study is that the researchers specifically chose to have participants look at the websites for a certain amount of time. While this can be appropriate if you are looking for the baseline effectiveness of some kind of campaign material, it does not seem entirely useful for the purpose of their research. When trying to make generalizable claims about how the voting public is affected by candidate websites, you have to take into account that many are not affected at all, and even further, that those who are affected by the websites sometimes spread that information to others. It is possible to have secondhand knowledge of a website. By priming their participants, Park and Choi lost all possible data about whether their participants had seen or heard about the websites at all. It seems like it may have been useful to first distribute a survey or to have participants look at the websites partway through the focus group.

All of this is not to say that Park and Choi’s study does not have merit. Conducted in 2000, the study fell at a particularly valuable juncture when the web components of political campaigns were first being explored. Park and Choi describe the responses of participants to both Bush and Gore’s websites; those who liked Bush’s website appreciated its simplicity, and those who liked Gore’s website appreciated how thoroughly it covered the relevant issues. At this point in digital media development, we know that both approaches are extremely valuable. Though they eventually conclude that interactive features on websites are not perceived as the same as interacting with the candidates, they do identify early efforts at personalization as being useful (which were later used to great effect by the Obama campaign).

Saldana Ch. 5 and 6

Saldana's The Coding Manual for Qualitative Researchers

Ch. 5: Second Cycle Coding Methods

The Goals of Second Cycle Methods
The second cycle of coding is the point at which the myriad codes created in the first cycle get refined into larger codes or categorical labels. The idea is to begin finding trends or themes within the data which can eventually be assembled into a theory or model. We have to be careful that our categories are not too disparate; if they are, we are probably misrepresenting some things because data will not tend to hold such variety without overlap.

Second Cycle Coding Methods
Pattern coding: a meta-code that pulls the existing codes into a smaller number of codes or categories
Focused coding: looks for frequent or high intensity codes to identify salient features of the text
Axial coding: reassembles split data; looks for dominant and subordinate codes to create a categorical system that specifies the properties and dimensions of categories
Theoretical coding: Identifies a core category and formulates codes that relate the data to that category; can be theory-driven or emergent
Elaborative coding: Uses categories/themes from a previous study to interpret the data; strengthens the theory by seeing how well the prior coding set explains the new data
Longitudinal coding: Used for long-term studies; notes increases/decreases in study variables, accumulations, epiphanies, idiosyncrasies, constancy, and missing data
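As a loose illustration (not from Saldana), the collapsing move of pattern coding can be sketched in code. The first-cycle codes and category labels below are invented for the sake of the example:

```python
from collections import Counter

# Hypothetical first-cycle codes assigned to excerpts of interview data
first_cycle = [
    "asks family for help", "avoids new software", "reads manual",
    "asks family for help", "experiments on own", "avoids new software",
]

# Pattern coding: a meta-code mapping that pulls the existing codes
# into a smaller number of categories
pattern_codes = {
    "asks family for help": "HELP-SEEKING",
    "reads manual": "HELP-SEEKING",
    "experiments on own": "SELF-DIRECTED LEARNING",
    "avoids new software": "RISK AVOIDANCE",
}

# Counting category frequencies hints at which patterns dominate the data
categories = Counter(pattern_codes[c] for c in first_cycle)
print(categories.most_common())
```

The same counting step is also a crude version of focused coding: the most frequent categories are candidates for the salient features of the text.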

Ch. 6: After Second Cycle Coding
It can be difficult to transition from fully-coded data to making theoretical claims. There are many different formats that research can take, so Saldana focuses on solidifying ideas.

Focusing Strategies
The "top 10" list: Use the 10 most interesting or varied pieces of data to illustrate the salience or breadth of your research
The study's "trinity": Identify the three most important concepts in your research and figure out the relationship between the three
Codeweaving: Using the major codes in your research, tell a story that structures the relationships between the codes
The "touch test": Look at your codes and find those that are things-- what can be touched? These codes need to be abstracted a level (mother to motherhood)

From Coding to Theorizing
Elements of a theory: Identify the if/then components or relationships within your coding
Categories of categories: Place your existing categories into larger categories

Formatting Matters
Rich text emphasis: Bold major concepts and italicize important assertions
Headings and subheadings: Use headings and subheadings to structure an argument

Writing About Coding
It is helpful to include as much information about the methods you used as possible. This includes everything from the data acquisition and sampling to the computer programs you used to code and manage the data. Make sure to emphasize major outcomes of the analysis.

Ordering and Reordering
Analytic storytelling: Tell the story of your analysis in a chronological way
One thing at a time: Write about concepts or categories one at a time, keeping them separate initially
Begin with the conclusion: If having trouble writing, begin with the conclusion

Assistance from Others
Peer and online support: Have peers (in person or online) look at your work and provide suggestions
Searching for "buried treasure": Have readers of your work look for important ideas or assertions that are not explained

Final Statement on Seniors and Technology

It seems that what needs to be done is to take somewhat of a marketing approach to this project. In particular, I think it might be useful to implement segmentation when looking at the possible audience of any technology help/resource center. 

From my experiences, there seem to be at least three major profiles that need to be addressed: the DIY user, the respectfully curious, and the non-user.

The DIY user is the type that I met most often at the senior center. They use technology to their own satisfaction. When they need to learn something, they learn it on their own through experimentation or targeted help-seeking (i.e., searching Google for solutions rather than asking around or calling tech support). To best support these users, centers should have technology available as well as product guides/manuals. I suspect that a DIY user would refer to a product manual if it is available in the same room as the technology, but they might not go asking for it at a centralized desk. The most common complaint that I heard from these users was that the technology available at the senior centers is a bit outdated and slow.

The respectfully curious are the users that realize the potential available with digital technologies, but they are wary of experimentation. They have learned certain skills (such as checking email or buying things online), but they are hesitant to try new things out of the fear that they could somehow break something or make a mistake. It seems that this type of user would most benefit from classes or guided lessons. That being said, I've heard that technology classes at the senior centers are not well-attended, so there may be other ways to facilitate learning with these users.

Finally, the non-users are a diverse group. In STS, the non-user category is one that has been studied significantly, and most scholars advocate for further segmentation of this group. Often, this segmentation notes non-users that cannot access a technology, cannot afford a technology, have no use for a technology, or who willfully abstain from a technology. Though it seems that seniors who are non-users are often characterized as the last category-- willfully abstaining from a technology-- I suspect that they would consider themselves as non-users who have no use for the technology. One thing that is clear about new media/digital technology is that every function it offers was previously accomplished by other means, and nearly all of those other means are still available today. 

Though we know that certain things are moving toward the digital-only realm (an important one being medical records), we should be cautious about trying to orient a program entirely toward those who don't think they need it. It is important to anticipate a time when such a program will need to accommodate these non-users, but even then, the accommodation should be very targeted. For example, if non-users start running into the problem that their medical records are only accessible online, the program will need to provide help for that specific issue. It would be very discouraging to ask, "How can I find my medical records online?" and receive the answer, "Well, first, you need to take this basic computing class." Instead, if they are shown how to retrieve that information, it is much more likely that they might begin to see uses for the technology and move into the respectfully curious category.


Anderson, Ch. 1

Anderson Notes, Media Research Methods, Ch. 1

Communication domains include mass communication (study of messages sent to mass audiences through media technologies), media studies (media literacy), and mediated communication (all forms of communication that use some kind of mediation; anything but face-to-face).


Methodologies are standardized ways of producing knowledge. Empirical methods are concerned with experience and materiality, though not necessarily quantitative knowledge. Metric studies are quantitative, applying a logical framework to studied variables. Hermeneutics forms an alternative to metric study, systematizing interpretation for qualitative inquiry. Critical methods are used for revealing frameworks and systems of power and control. These areas of scholarship overlap in certain places, creating epistemologies. Metric empiricism describes material reality through measurement, with the assumption that reality is stable and objective. Interpretive empiricism is also concerned with material reality, but it is accessed through lenses of interpretation that reject objective reality. Critical-empirical scholars are interested in applying critical/cultural insight, but reject the singular truths or narrative put forth by earlier critical scholars, instead choosing to employ grounded theory with a critical agenda. 

Tuesday, October 22, 2013

Class Notes

Qualitative Methods

Analytic Review of articles

We need to ask questions such as:
  • Is the problem clearly stated? 
  • Is the approach appropriate to the problem? 
  • Is the method adequately described so that the quality of the work can be assessed? 
  • Does the analysis account for the data set (does it poach or cherrypick)? 
  • Are the claims made supported by the evidence (are they self-fulfilling or contrary)? 
  • Do the conclusions cohere with evidence and with the problem?
With methodology, there needs to be express systematicity; simple textual analysis is not enough. We need information on units of analysis-- how many and what percentage? The level of coding should be specified as well.

We should be suspicious about any instances without contrary cases. We can even assume that the author is lying-- we have to remain skeptical as long as the author cannot sustain their case.

Implicative Review of articles

We need to ask questions such as:
  • What value is the study for you?
    • Have you learned something about the problem or the approach? The methodology?
    • Has it taught you about practical processes you can emulate?
    • What was the argument structure or writing style?
    • Do we find value in the evidence or the way the evidence is handled?
  • Does it contribute to the discipline?
    • Is it innovative methodologically?
    • Does it make a contribution to theory?
The half-life of communication research is about five years. After that, the citations rapidly drop off.

Saldana's Ch. 3 and 4

Chapter 3 of Saldana covers quite a lot of specific coding methods. Chapter 4 then concerns itself with transition between the first and second cycle of coding. 

We need to think about our ontology going into a project. It may be possible to try on other ontologies, deciding what fits best for you and your project. The theories you use are also impacted by ontology.

While we can use multiple coding schemes, we ought to decide on them before starting the work. We need to be very certain of the theories we use.

When looking at interviews, we might want to code the questions as well-- they can expose agendas or ideas that are inserted by the interviewer.

Data does not force us to write in a particular way; science is a rhetorical exercise. In this way, we cannot really be insincere. We need to persuade a known audience, and if the audience wants numbers, we should provide numbers. It is simply an act of persuasion through the available means. There is no real difference between codes and numbers. The major difference is that most metric measures use a priori coding, whereas qualitative work tends toward emergent coding. Coding is a whole-body exercise; you can get tired and experience definition drift. We cannot presume that our respondents are naive or unaware of our agenda.

When coding absences, we look for moments of transition. If there is a line of thought that suddenly changes to another topic, we know that there is an absence. Every single thing that is said prevents other things from being said; every presence is an absence.

Senior Center Project
We will eventually put together a statement of what we've learned from our interviews. These will be collected and given to the AARP. The project uses the EDDIE model: engage, discuss, decide, implement, and evaluate. 

Monday, October 21, 2013

Outside Reading - Kazmer & Xie's Qualitative Interviewing in Internet Studies

Kazmer, M. M., & Xie, B. (2008). Qualitative interviewing in internet studies: Playing with the media, playing with the method. Information, Communication & Society, 11(2), 257–278. doi:10.1080/13691180801946333

Kazmer and Xie’s “Qualitative Interviewing in Internet Studies: Playing with the Media, Playing with the Method” is a methodological meta-analysis of internet-mediated semi-structured interviews. For the purpose of the study, they divide interviewing methods into the following groups: face-to-face, over the telephone, through email, and through instant messaging (IM).

One of the interesting points that Kazmer and Xie bring up is the idea of contextual naturalness—in an interview, the participant should be able to express themselves the way they want to. For some people, internet interviewing can hinder this; for others, it is an ideal environment. Like Tracy’s discussion of internet interviewing methods, Kazmer and Xie mention the difference between synchronous and asynchronous environments; this provides the conceptual difference between email and instant messaging. Whereas email is generally asynchronous, IMs are meant to be real time. I would possibly argue that IMs provide a level of asynchronicity anyway. After all, in an IM, you are still free to take your time, compose and recompose.

A common thread in analyses of internet facilitated interviews is that participants have a tendency to disappear or miss scheduled interviews. It is difficult to schedule synchronous interviews online and asynchronous interviews are more likely to be forgotten by participants.

Another point that Xie and Kazmer note about online interviewing is that there are certain technological considerations to be made when saving or recording interviews. There are only so many things that actually can be recorded, and the internet strips the situation of most environmental and visual data. Also, when doing online interviewing, both the participant and the researcher are left with complete records of their interactions.

After interviews are conducted, it is up to the researcher to decide how to assemble and clean data. Often email and IM interviews can include attachments—photos, documents, or other files—and there is a question of how and where to include these materials. IMs are also problematic because they are susceptible to conversational disorder—where questions are asked and answered out of order.

Despite the drawbacks, Kazmer and Xie have found that internet interviewing is becoming increasingly common. It allows for a wide participant base, and it is easier to find interviewees, generally. The problems they note are not insurmountable, and it is up to the researcher to decide how to represent all data.

Saldana Ch. 3

The Coding Manual for Qualitative Researchers, Ch. 3
First Cycle Coding Methods

The first cycle of coding is the initial coding of data. Saldana identifies seven types of first cycle coding: grammatical, elemental, affective, literary and language, exploratory, procedural, and themeing. Each of these types has several particular coding methods associated with it. The second cycle of coding is characterized as more analytic, and it includes classifying, prioritizing, integrating, synthesizing, abstracting, conceptualizing, and theory building.
Choosing the right type of coding is dependent on several factors, and the choice should be guided by your research question, your paradigmatic and methodological choices, and exploratory work with your data set. Saldana suggests a generic coding method for figuring out the right type of coding for your data, beginning with attribute coding and followed by structural/holistic coding, descriptive coding, and in vivo, initial, or values coding.

Grammatical Methods
Attribute coding: Used for nearly all studies, attribute coding goes before the text and lists the attributes relevant to the data set, including qualitative and quantitative properties.
Magnitude coding: This adds an alphanumeric measure of intensity to another coding scheme
Subcoding: Creates a secondary code to accompany a primary coding system. The subcode is hierarchically below the status of the primary code.
Simultaneous coding: Utilizes a second code of equal standing or importance to the primary code, at the same time on the same texts
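As a hedged sketch (not from Saldana), the grammatical methods can be pictured as layers on a single coded unit of data. The class, codes, subcodes, and magnitude scale below are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

# One coded unit of data, with the grammatical layers stacked on top of it
@dataclass
class CodedSegment:
    text: str
    code: str                        # primary code
    subcode: Optional[str] = None    # subcoding: hierarchically below the primary code
    magnitude: Optional[int] = None  # magnitude coding: e.g. 1 = low, 3 = high

segments = [
    CodedSegment("I just figure it out myself", "LEARNING", "self-taught", 3),
    CodedSegment("My grandson sets it up for me", "LEARNING", "assisted", 1),
]

# Magnitude then supports later filtering, e.g. pulling high-intensity segments
high = [s for s in segments if s.magnitude == 3]
```

Simultaneous coding would amount to attaching a second, equal-status code field rather than a hierarchically subordinate one.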

Elemental Methods
Structural coding: Codes related to questions asked during interviews/recurrent topics from participants; especially useful with large numbers of participant responses following a similar structure
Descriptive coding: Creates codes related to the topic of qualitative data. This is different from codes referring to the content.
In vivo coding: Uses the language of the data to code instead of labels chosen by the researcher
Process coding: Codes through use of gerunds—what the language is doing conceptually or what an observed participant is physically doing
Initial coding: Open-ended approach that breaks data down into parts and compares them; iterative

Affective Methods
Emotion coding: Labels emotions experienced or recalled by participants
Values coding: Codes values, attitudes, and beliefs expressed in participant responses
Versus coding: Codes for binaries, dialectics, and rivalries
Evaluation coding: Used to evaluate programs, evaluation coding makes value judgments (non-quantitative) and attempts to describe, compare, and predict success from a program

Literary and Language Methods
Dramaturgical coding: The application of dramatic elements to qualitative data (not just Burke’s pentad), drawing from Goffman and impression management
Motif coding: Uses index codes for classifying folk tales, myths, and legends; the motif is the smallest unique unit in the story
Narrative coding: Open form of coding whatever the researcher considers a narrative; done from a literary perspective
Verbal exchange coding: Aims at finding a generic form of conversation. Codes precise transcripts of conversations as phatic communication, ordinary conversation, skilled conversation, personal narratives, and dialogue.

Exploratory Methods
Holistic coding: Looks for themes and issues by taking in qualitative data as a whole rather than analyzing it line by line.
Provisional coding: The creation of a set of codes before analyzing the data based on existing knowledge of the texts
Hypothesis coding: Coding done with a predetermined set of codes aimed at testing a particular hypothesis

Procedural Methods
Protocol coding: Protocol coding occurs when all coding is done by a rigid, preset system rather than an open or iterative process
Outline of Cultural Materials (OCM) coding: Coding that follows the OCM, a topical index for anthropologists and archaeologists
Domain and taxonomic coding: An ethnographic type of coding, domain and taxonomic coding looks for cultural knowledge; it separates processes into steps, looks for cultural categories, and identifies semantic relationships (strict inclusion, spatial, cause-effect, rationale, location for action, function, means-end, sequence, and attribution).
Causation coding: Looks for causal beliefs in qualitative data; notes dimensions of causality, including internal/external, stable/unstable, global/specific, personal/universal, and controllable/uncontrollable

Themeing the Data

A theme is a phrase or sentence that identifies what a unit of data is about or means. Creating themes means finding groupings of implicit ideas among the coded data. 

NVivo Experimentation 2

For this week, I decided to use NVivo to put together my weekly example for my online Communication Criticism course. When I've taught the course in person, it has been easier to teach coding because I can just give everyone a paper copy of some kind of text, and they can code by hand. In the online environment, it's been harder to do this-- all of my visuals are either static images, blocks of text, or animated PowerPoints. I can't just explain coding and hope that they try it on their own.

This upcoming week, I'm teaching Narrative Criticism. By using NVivo, I can go ahead and code several texts by a single author, and when it comes time to create the example, I can use a screen capture program to show my students coding and some of the data outputs. I don't expect them to use NVivo themselves, but it's a more coherent way to show coding than trying to use a multi-color highlighting scheme in Word.

The project I created includes three stories by David Sedaris that have been broadcast on This American Life. Following Sonja Foss's explanation of Narrative Criticism, I am going to code the texts for theme; I have chosen to focus on separation, loss, and lack. I have also run word frequency analyses and generated a few other outputs (I like making visuals).
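A word frequency query like NVivo's can be approximated in a few lines. This is only a rough stand-in; the text and stopword list below are invented rather than drawn from the Sedaris stories:

```python
import re
from collections import Counter

# Toy corpus standing in for an interview or story transcript
text = """When we moved away, we left the house, and we left
each other. The loss was something nobody named."""

# A minimal stopword list; NVivo ships a much larger configurable one
stopwords = {"the", "and", "we", "was", "each", "when", "away"}

# Tokenize, lowercase, drop stopwords, then count
words = re.findall(r"[a-z']+", text.lower())
freq = Counter(w for w in words if w not in stopwords)
print(freq.most_common(3))
```

The output of a query like this is what the screen capture would show students: a ranked list of content words that can seed theme choices such as loss.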

The project file can be found here.

Saturday, October 19, 2013

Tracy Ch. 8

Qualitative Research Methods

Interview Practice: Embodied, Mediated, and Focus-Group Approaches

Negotiating access for interviews

Getting participants to agree to an interview is extremely important. Some types of people are more likely to agree than others, particularly when the interview is about sensitive topics. Researchers should try to start a relationship with the person they want to interview before asking for the interview. This increases the chance that the participant will agree. After an interview is agreed upon, the researcher should send a reminder to the participant that includes information on how to contact the researcher if something goes awry on the day of the interview.

Conducting face-to-face interviews

When scheduling an interview, it is important to choose locations that are accessible and public, but provide a degree of privacy. They should feel comfortable and safe to the interviewer and interviewee. Recording devices should be tested out beforehand, and locations should be chosen that are quiet enough to use a recording device. When the participant arrives, you should offer to get them a drink or something to eat (if available). It is also useful to take site notes and cursory information about the participant before the interview begins. Scheduling should be done in a way that leaves time for briefing and debriefing.

A good interviewer does more than simply ask questions-- they should prepare themselves to be knowledgeable about the topic at hand and the participants; they should be gentle, forgiving, sensitive, and open-minded; they should structure the questions to be probing and show their attentiveness; and they should actively interpret what the interviewee says. By repeating certain ideas back, interviewees are able to correct interpretations or provide more nuance. Researchers should be mindful of their non-verbal communication and refrain from anything that would connote judgment or shock. At the end of the interview, researchers should be appreciative. It is especially useful to shut off any recording device and ask if there is any information the interviewee is willing to share now that the recorder is off. When the interviewee leaves, the researcher should take notes on anything notable that was not captured by the recorder.

Technologically mediated approaches to interviewing

Technologically mediated interviews can be synchronous (real time) or asynchronous (reply on own time). There are significant advantages to online interviewing-- it is cost effective and can reach a wide participant base, it can increase engagement, it can be more flexible, it can change power dynamics that are present in face-to-face communication, it can provide data that is unavailable face-to-face (grammar, typing, deaf or heavily-accented participants), and it does not need to be transcribed.

There are also disadvantages to technologically mediated interviews. They provide no nonverbal data, and they require participants to have access to a computer as well as the relevant technological skills. Participants can more easily become distracted while interviewing or they can drop off of the project entirely. There are more confidentiality risks on both the part of the interviewer and the interviewee, and it is nearly impossible to verify identities.

The focus-group interview

Focus groups traditionally bring together 3-12 people who are taken through a guided group discussion. Focus groups can be particularly valuable, especially when they are able to create a group effect-- where responses become chained, and members of the group leave with a new understanding of the topic through their interactions with others. It also provides a space to observe vernacular language and communicative interactions. The focus group can serve as a way to gather creative data; researchers can ask participants to rank concepts, draw or create pictures or collages, create metaphors, or act out scenarios.

Members of a focus group should have a key characteristic in common. It can be valuable to have oppositional points, but the group can be hard to manage if there are not meaningful commonalities. On the day of a focus group, you should provide comfortable accommodations for the participants, and before things begin, you should provide them with informed consent and make it known that the event will be recorded and/or observed.

When moderating, it is important to know the discussion guide so that the discussion can be moved in the correct direction. You should not take notes, but rather focus on providing positive and supportive feedback. This can also include paraphrasing and asking for clarification. It is important to provide breaks every hour-- during this time, natural conversations can arise. Closing the focus group should consist of reminding participants to keep the conversations confidential. Afterwards, you should take notes on important details and make a to-do list for the next focus group.

Overcoming common focus group and interviewing challenges

Some challenges that arise during interviews can come from both the interviewee and the interviewer. The interviewee can exhibit unexpected behaviors that make it difficult to schedule interviews or record them. Interviewers can talk too much, interrupt, or use problematic formulations-- interpretations of the interviewee's responses that gloss over or omit certain information for the sake of simplicity. They also have to be aware of any problems in the interview guide and be mindful of tangents. Some researchers have problems with probing and creating follow-up questions; they should aim to ask for clarifications, deeper meanings, or greater detail. When participants become emotional, it can be difficult to respond. Tracy advocates trying to identify with or show compassion to the participant. Researchers are also often unaware of what to do if an interviewee is deceitful. To combat this, you can build redundancy into the question guide or refer to others with whom you've spoken. Not all lies are useless, either; often they are strategic.

Transcribing

There is no universal transcription method or set of symbols-- these are constructed by the researcher and meant to fulfill whatever purpose they are put to. The choice of what details to note comes down to what the study is examining. More detailed is not always better. It can take quite a long time, and the more people that are speaking, the longer it takes. Transcriptionists have to make choices about things like punctuation, laughter, pitch, tone, and homonyms. Fact checking transcripts requires that you listen to the interview while reading the transcript to make sure that it does not omit (or add) anything. Voice recognition software can also be used.

Tracy Ch. 7

Qualitative Research Methods

Interview Planning and Design: Sampling, Recruiting, and Questioning

The value of interviews

Interviews should be thought of as guided conversations, and we need to remember that they are not always face-to-face, one-on-one. Interviews can occur through various media and can involve small groups as well. The goal is to access subjectively lived experiences and viewpoints related to a particular topic. Because the interviewer has power in the situation, it is up to them to treat the interviewee and the resulting data ethically. Interviews should be seen as a process of mutually creating a story or meaning, and so the process of conducting an interview is just as important as the data collection.

Interviews provide the unique ability to gain access to motivations, thoughts, and reasoning that would otherwise be unaccounted for. Furthermore, they can provide information about things that cannot be otherwise observed-- private or past experiences. Interviews are also a good opportunity for getting more information about observations; they can act almost as a member check.

Who, what, where, how, and when: Developing a sampling agenda

The sampling plan or sampling agenda is a proposed way to choose sources for interview data. In qualitative work, most researchers use purposeful sampling, which chooses participants and data that will work especially well with the goal of the research project.

Random samples - Every member of a population has equal chance of being interviewed (or otherwise have data collected).
Convenience/opportunistic sampling - This very common type of sampling uses participants who are convenient to the researcher. It is often quicker, although on its own it is usually not considered sufficient for a full study.
Maximum variation samples - When there is significant variation within a population, researchers may choose to portray this variation through purposefully choosing participants from the far ends of the spectrum.
Snowball sampling - This type of sampling is good for reaching difficult-to-access or hidden populations. It begins by choosing a few participants and then asking them to suggest friends, family members, or colleagues who may be open to participating.
Theoretical-construct samples - When working with a theory that requires participants to meet certain goals or have certain characteristics, theoretical-construct sampling purposefully chooses those people.
Typical, extreme, and critical instance sampling - Typical instance sampling looks for average participants; extreme instance sampling looks for participants who exhibit a chosen characteristic in the extreme; and critical instance sampling requires participants that have experienced something in particular (which may be extreme).

A sampling agenda can often be determined through your research question and chosen theories. If doing participant observation, it is often a good idea not to set a strict sampling agenda very early-- doing so can restrict your ability to observe what the field has to offer.
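To make the mechanics of two of these strategies concrete, here is a minimal Python sketch (the participant pool, names, and referral links are all invented for illustration) contrasting random sampling, where every member has an equal chance of selection, with snowball sampling, which starts from seed participants and follows their referrals outward:

```python
import random

# Hypothetical participant pool and referral network (invented for illustration).
pool = ["Ana", "Ben", "Cy", "Dee", "Eli", "Flo"]
referrals = {"Ana": ["Ben", "Cy"], "Cy": ["Dee"], "Dee": ["Flo"]}

# Random sample: every member of the population has an equal chance.
random_sample = random.sample(pool, k=3)

def snowball(seeds, referrals, rounds=2):
    """Snowball sample: begin with seeds, then add each round's referrals."""
    sampled = list(seeds)
    frontier = list(seeds)
    for _ in range(rounds):
        nxt = []
        for person in frontier:
            for friend in referrals.get(person, []):
                if friend not in sampled:
                    sampled.append(friend)
                    nxt.append(friend)
        frontier = nxt
    return sampled

print(snowball(["Ana"], referrals))  # → ['Ana', 'Ben', 'Cy', 'Dee']
```

The contrast is the point: the random sample ignores who knows whom, while the snowball sample can only ever reach people connected to the seeds-- which is exactly why it suits hidden populations and exactly why it is not representative.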

Interview structure, type, and stance

Structured interviews repeat the same questions for each participant, often with the help of an interview schedule. They are most useful with large samples whose answers need to be compared consistently. These interviews may be good for quantity, but their depth suffers.

Unstructured interviews are far more flexible. What structure they have is provided by an interview guide-- it goes through required topics, but not specific questions. Unstructured interviews require interviewers to be more skilled and empathetic; they need to be able to pick up on emotional cues and keep the conversation on the research topic.

Types of interviews:
Ethnographic - Informal, emergent, instigated by the researcher.
Informant - Interviews with those who are savvy in the scene; requires a long-term relationship.
Respondent - A group of participants with similar subject positions who speak only for themselves.
Narrative - Open-ended; participants tell stories; related to oral history.
Life-story or biographic - Includes stories, asks about life as a whole, and provides understanding or empathy.
Discursive - Focuses on structures of power and their effects on knowledge; asks how participants navigate power.

There are multiple stances for the interviewer to take. Deliberate naivete falls between overt honesty and deceit, with the interviewer setting aside presuppositions and judgments. Collaborative or interactive interviewing places the interviewer and participant on the same plane, with each interviewing the other. Pedagogical interviews ask for expert knowledge or emotional support. Responsive interviewing "suggests that researchers have responsibilities for building a reciprocal relationship, honoring interviewees with unfailingly respectful behavior, reflecting on their own biases and openly acknowledging their potential effect." The friendship model, which comes out of feminist scholarship, is similar. When interviewers deliberately provoke the people they are interviewing, the technique is called confrontational interviewing.

Creating the interview guide

Interview questions should not use jargon or overly complicated language. They should focus on one topic at a time, and they should be open-ended. Questions should also be neutral and not lead to particular answers. They should uphold the participants' identities, and they should be followed by appropriate probes for further information.

Interviews should be sequenced as follows:
Opening - Breaks the ice, sets expectations, and begins with experience-related questions.
Generative questions - Questions that generate frameworks for the interview. These can ask for descriptions of experiences or processes that act as tours, and they can also include examples or timelines. Hypotheticals can work here as well. Behavior questions use past experiences to ask for factual reports of behavior. It is also productive to ask about participants' ideal visions of certain things. This is also the part of the interview in which to ask participants to compare and contrast things, ask about their motivations (or others' motivations), and ask for predictions about the future.
Directive questions - Direct questions call for a specific type of answer, and these direct the trajectory of the interview. Closed-ended questions require very specific answers. Typology questions ask respondents to categorize certain knowledge, and elicitation questions prompt discussion by showing some kind of stimulus (video, image, text, etc.). Data-referencing questions ask participants to comment on data gathered by the researcher in the past. Member reflection questions can be used to get participants' opinions on observations made by the researcher, and devil's advocate questions make participants respond to skeptical positions. This is also the point to use potentially threatening questions; they are best left for the end of the interview in case they somehow affect the participant.
Closing the interview - At the very end, it is appropriate to ask catch-all questions and questions that enhance the identity or self-perception of the interviewee. You can also ask at this point if the interviewee has a preferred pseudonym.

Tuesday, October 8, 2013

Class Notes

Qualitative Research - 10.8.2013

Saldana's coding manual is much more geared toward being a reference, whereas Tracy's work is concerned with an overall understanding of qualitative research. The coding manual also advocates using methods only as long as they are useful. We should allow for the fact that some coding schemes are going to be emergent-- the first coding is never final.

The multiple interpretations that can come out in coding a text make the conflict in mixed methods apparent. If a single text can have multiple interpretations, then it can't have an objective, measurable truth.

There are two types of coding: reductionistic and rhizomatic. The rhizomatic model is emergent-- it has offshoots and breakaways, and it enlarges the thing you're looking at. The reductionistic model uses codes that stand for other things, concentrating the thing you're looking at.
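The reductionistic direction can be sketched as a simple codebook in which many text segments collapse into a few codes. This is an illustrative Python sketch, not an NVivo feature; the codes and excerpts are invented:

```python
from collections import Counter

# Hypothetical codebook: each short code stands for a larger construct.
codebook = {
    "REV": "revolutionary language",
    "ECON": "money/economic language",
    "HACK": "hacking (against the creator's specifications)",
}

# Coded segments map excerpts to codes-- many segments, few codes.
coded_segments = [
    ("a console for the people", "REV"),
    ("only $99 at launch", "ECON"),
    ("root it and run your own OS", "HACK"),
    ("power to the players", "REV"),
]

# The reductionistic payoff: the text concentrates into code frequencies.
counts = Counter(code for _, code in coded_segments)
print(counts["REV"])  # → 2
```

A rhizomatic scheme would run the other way-- each pass through the data would add new offshoot codes rather than collapsing segments into an existing few.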

We need to be careful when we're coding-- we shouldn't use ad hoc abbreviations because we'll likely forget them or misinterpret them later.

We begin with the problem. The more specific we can be with our problem statement, the more directive we can be with our research. Working with NVivo, the more specific problem statement leads to early classification schemes, whereas more iterative work means that your classifications in NVivo might change or multiply.

There are source and node classifications. Source classifications can have sub-categories, and classification is done when you bring in the source. If we are going to classify a lot of items the same way, we can set that designation as a default. Source nodes are used for coding that accommodates the entire source-- in other words, designations that apply to the whole of the source, rather than coding that would get used within the source.

After deciding on a problem, we need to choose our texts. We need to presort these texts, then classify them in NVivo.

My most recent save file for NVivo is here.

Monday, October 7, 2013

Article Analysis

Looking at the I Quit! article, there are several positive and negative practices influencing the quality of this study. On the whole, the article is well-written and easy to read, aside from a single word-omission error (a missing "a"). It follows the fairly well-trod formula of literature review > methodology > analysis > discussion, though the implications have been thrown into the conclusion. I probably would have made that part of the discussion.

Aside from composition, I have some questions about the methodology. Interviewing definitely seems like the right choice for the kind of information this researcher is trying to find. The semi-structured interview is also a good choice, since it directs the interviewee to the necessary topics, but it leaves the process open to emergent topics. The questions themselves, listed in the Appendix, do seem a bit leading. The first set asks the questions "What did you anticipate the job would be like?" and "What was the job really like?" in succession, which automatically reveals an assumption that the participant might have left their job because of a difference in job description and experience. It also flattens the process, in a way, assuming a much more short-term or homogeneous experience. Admittedly, with the way these interviews are conducted, the participant is still free to say, "No, it's more complicated than that." 

I also noticed that the questions were aiming for a much deeper conclusion than the paper asserts. While the main point of the article is that participants told specific types of groups they were quitting, and in a particular order, the questions seem very concerned with the particular message. The article touches a bit on the tenor of the quitting messages given to each group; it does not, however, go into this in depth.

There are also some unstated issues here regarding the participants. When the researcher lists the variety of positions held by respondents, there are only two that could be classified as anything but white-collar work. A reason for this could be the sampling method, which drew participants from the researcher's social circle. I suspect that the process of quitting can vary dramatically across different levels and types of work. It is also likely affected by whether the participant has another job lined up, and if not, how difficult they expect finding one will be. These expectations can make a great deal of difference when someone is concerned with being able to provide a reference in the future. All of this could be addressed (and absolved) in a Limitations section.

Finally, I think the article could stand to build up its rationale. While it does demonstrate that quitting behavior has not been studied enough, it needs to make a more explicit case for why we need to know when and with whom people communicate while in the process of quitting a job.

Working with NVivo10

I decided to start working on a project for another class-- Rhetorical Theory-- with NVivo 10 to organize my sources. For this project, I'm looking at the rhetoric surrounding the OUYA gaming platform, including the rhetoric of creators, users, and the console itself (understood as user script).

I began by installing the NCapture extension to my browser and capturing the OUYA Kickstarter page as a .pdf. I was able to open this in NVivo, and I coded it for several different recurring themes-- revolutionary language (anything that described shifting power balances), money/economic language, the ideas of hacking (changing the system against the creator's specifications) and experimentation (changing the system in accordance with the creator's specifications), and the system's relationship to other gaming and technology companies. Coding this wasn't difficult once I found the options to highlight and create colored ribbons. I think I need the visual feedback so that I know my codes are being logged and counted.

One of the first problems I encountered was with the video embedded in the Kickstarter page. Kickstarter uses a video player that is not recognized by NCapture or my installed video capture extension. I was able to find the same video on YouTube, and before I realized that NCapture could take YouTube videos, I downloaded it and converted it to AVI myself. I imported the video, but I am having trouble coding it. The pause/play/rewind features are also a bit clunky, especially when you want to capture things that only last a few seconds. I'm definitely more used to the keyboard controls found in Final Cut software, Quicktime, and other applications.

For the time being, I put the video coding on hold when I figured out that I could capture all of the comments on the YouTube video I downloaded. I like that NCapture stores these like a database and natively supports the thread structure (reply format). Strangely, NVivo is showing me a different number of comments available than YouTube does, but there might be discrepancies because the comments on YouTube are split into two different pages. Whenever I try to capture these pages separately, I get the same data when I import it into NVivo. I'm not sure what's happening there, but since I'm getting about 690/730 comments, I'm not hugely concerned about the missing 40. I'll still be playing with this in hopes of figuring out just what's happening.

So far, I have coded about 200 of the available comments. I had been coding words, phrases, and fragments of sentences, but I suspect that the better approach is to code comments as wholes, which gives a far more unified unit of analysis. If I did that, I would then just add a code for "unrelated," which would get used pretty often in this particular thread of comments. It is, however, already noticeable that the focus of the YouTube commenters is vastly different from the focus of the video or the OUYA Kickstarter page.
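Switching the unit of analysis from fragments to whole comments can be sketched like this-- a minimal Python illustration in which each comment receives exactly one code, with "unrelated" as the fallback. The comments, codes, and keyword lists here are all hypothetical, and real coding would of course be interpretive rather than keyword-driven:

```python
# Hypothetical comments and keyword-based codes, for illustration only.
comments = [
    "This will change console gaming forever",
    "First!",
    "Ninety-nine dollars is a fair price",
    "lol",
]

keywords = {
    "revolutionary": ["change", "forever"],
    "economic": ["price", "dollars"],
}

def code_comment(comment):
    """Assign one code per whole comment; 'unrelated' is the fallback."""
    text = comment.lower()
    for code, words in keywords.items():
        if any(w in text for w in words):
            return code
    return "unrelated"

codes = [code_comment(c) for c in comments]
# → ["revolutionary", "unrelated", "economic", "unrelated"]
```

The design choice this makes visible: with whole comments as units, every comment is accounted for, and the frequency of "unrelated" itself becomes data about how far the thread drifts from the video's topic.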

From here, I intend to work harder on figuring out video coding. I would like to eventually record some captures from the OUYA itself (I own one), which I can code as user scripts (in other words, what the device encourages users to do and what it discourages). That work would almost have to be done as video. I can also foresee coding images, because the packaging of the system includes some interesting language use as well.

My project file can be downloaded here.

Tuesday, October 1, 2013

Class Notes

In the upcoming weeks, we need to post one outside reading, one experiment/experience with NVivo/qualitative research, and a post addressing questions and concepts about qualitative research.

The assignment for this week is to write a review of I Quit, a text available in the class files. We should look at this as if we are reviewing it for a journal. It must remain confidential.

We also need to post our notes on Anderson.

AND we also need to rewrite our guesses about the wood shavings, redefining them in terms of how the wood shavings could qualify as a gift for a 50-year-old daughter. This should include a discussion of the cultural implications of gift giving. 

We need to remember the difference between field notes and site notes. Field notes are the first level of abstraction. We're looking for archetypes within the situation-- power users are one example of a type of person we might encounter several times. Site notes should be written within an hour of the observation, and field notes should be written several days later.

Coding occurs in several cycles: primary cycle (what is present in the data) and secondary cycle (first abstraction, themes, how and why).

Tracy identifies eight major aspects of studies that help us decide if it is worthwhile: worth, rigor, sincerity, credibility, significant contribution, resonance, ethics, and meaningful coherence. Worth can include the tangible takeaways, social impact, and the answer to the "so what?" question. In order for our research to be sincere, it must be able to fail. This is somewhat akin to the falsifiability principle. For us, resonance takes the place of generalizability.

The coding that Tracy covers focuses largely on ethnographic coding. In terms of our own project, we have already framed our own work-- we know what we are looking for and how we're looking for it. Our work begins with a deep understanding of the study. This is followed by selection of texts and a close reading of the texts. From this, coding categories emerge-- underlying constructs with multiple values/codes. Categories are not themselves codes. Categories can emerge from the questions we ask. With codes established, we can figure out the unit of analysis. These can be formal features (sentences, paragraphs) or more difficult things like thoughts, ideas, and themes. Because not all texts you look at will meet the requirements you set up for a case, you must go through and validate cases before beginning coding.

The best way to deal with multiple cases in NVivo is through multiple files. Files can be in any format, though NVivo works best with Word and text files; PDFs should be OCR-optimized. File titles should include as much information as possible-- date collected, location, and type of data. There are three major classifications: source (applied across the entire case and known before any reading/coding of individual texts), node (used across units of analysis), and relationship classifications. Every unit of analysis must have a code, even if it is unassigned; this demonstrates our trustworthiness.
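The rule that every unit of analysis must carry a code, even an "unassigned" one, can be sketched as a simple validation pass. The data structures below are hypothetical and are not NVivo's internal format-- just an illustration of the trustworthiness check:

```python
# Hypothetical coded case: every unit carries a code, with "unassigned"
# used explicitly rather than leaving a unit blank.
case = {
    "source": "kickstarter_page.pdf",
    "units": [
        {"text": "A new kind of console", "code": "revolutionary"},
        {"text": "Shipping starts in March", "code": "unassigned"},
    ],
}

def fully_coded(case):
    """Trustworthiness check: no unit of analysis may lack a code entirely."""
    return all(unit.get("code") for unit in case["units"])

print(fully_coded(case))  # → True
```

An empty or missing code would make the check fail, which is the point: "unassigned" is a deliberate analytic decision that is recorded, while a blank is an omission.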