Applied Behavior Analysis Advanced Guidebook: A Manual for Professional Practice

About this ebook

This second edition of Applied Behavior Analysis Advanced Guidebook: A Manual for Professional Practice gives behavior analysts and other behavioral practitioners pragmatic advice, direction, and recommendations for being an effective clinician, consultant, supervisor, and performance manager. Like the first edition, the book includes chapters on evidence-based practice competencies as well as many new areas devoted to professional development, technology, and telehealth service delivery. Written by expert scientist-practitioners, each chapter is filled with guidance that follows from the most contemporary research support.

  • Focuses on professional practice areas required among behavior analysts
  • Includes forms, tables, flowcharts, and other visual aids to facilitate practice
  • Presents the most current guidelines for established ABA methods
  • Emphasizes the research basis for practice recommendations
  • Helps readers build skills and competencies that broaden scope of practice
  • Covers emerging topics of telehealth, technology, adult learning, and sports fitness

Language: English
Release date: Mar 3, 2023
ISBN: 9780323995955

    Applied Behavior Analysis Advanced Guidebook - James K. Luiselli

    Preface

    This second edition of Applied Behavior Analysis Advanced Guidebook includes practice domains that have continued to evolve over the years to become more refined and thus remain the core competencies for behavior analysts and other behavioral practitioners. This new guidebook also includes more recent developments such as telehealth modalities and technology-assisted services, with an emerging emphasis on ethics, diversity, multiculturalism, and expanded practice options. All chapters of the guidebook trace the historical basis for the topics reviewed, underscore the evidence support, present practitioner recommendations, and suggest ways to advance research-to-practice translation.

    My hope is that this guidebook captures the fast-paced evolution of ABA applications in children, youth, and adults; contributes to professional development; and improves organizations responsible for education, treatment, and client care. Contemporary behavior analysis not only remains firmly grounded in foundational principles (Baer, Wolf, & Risley, 1968) but also reflects new and innovative thinking while continuing to be driven by data, which point to context-informed change and respect for the attitudes and opinions of valued stakeholders (Wolf, 1978).

    I have been blessed with the guidance, direction, and good advice from many people who, whether they know it or not, have made this book possible—thank you Donald, Van, Carol, Gene, Jerry, Ned, Anne, Jill, Spencer, Ray, David, Warren, Ron, Paul, Michel, Nirbhay, Gary, and Joe. I am indebted to Rita, Frank, and Helena for the opportunity to collaborate and share the inspired work we are doing. With gratitude and love, I dedicate this book to my family, Tracy, Gabrielle, and Thomas, and our feline friends, Ellie, Bunny, and Sophie.

    James K. Luiselli, Clinical Development and Research, Melmark New England, Andover, MA, United States

    References

    Baer D.M., Wolf M.M., Risley T.R. Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis. 1968;1:91–97. doi:10.1901/jaba.1968.1-91.

    Wolf M.M. Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis. 1978;11:203–214. doi:10.1901/jaba.1978.11-203.

    Section 1

    Practice competencies

    Chapter 1: Preference assessment and reinforcer evaluation

    Judah B. Axe (a), Christopher A. Tullis (b), Caleb R. Davis (a), Mei-Hua Li (a)

    (a) Simmons University, Boston, MA, United States

    (b) Georgia State University, Atlanta, GA, United States

    Abstract

    Given the prominence of positive reinforcement in intervention programs based on applied behavior analysis, therapists need effective ways to determine what items or events function as reinforcers. This is accomplished by conducting preference assessments, and there are six major types: single stimulus, paired stimulus, multiple stimuli with replacement, multiple stimuli without replacement, free operant, and response restriction. This chapter describes these methods, as well as considerations for selecting preference assessments and conducting them efficiently. An additional use of preference assessments is evaluating the social validity of interventions and contexts. This is useful when allowing clients to choose interventions, living arrangements, employment opportunities, and social and leisure activities. Additional topics raised in this chapter are cultural considerations for conducting preference assessments and training people to conduct them.

    Keywords

    Preference assessments; Applied behavior analysis; Social validity; Interventions; Living arrangements; Cultural differences

    Introduction

    Positive reinforcement is the most basic principle and procedure in applied behavior analysis (ABA). Skinner (1938, 1953) discovered and defined positive reinforcement as the process in which a stimulus repeatedly follows a type of behavior, resulting in an increase in the future frequency of that behavior. For example, a therapist working with a child with a disability may be teaching the child to initiate conversations with peers. If the therapist presents a positive reinforcer, such as a high-five, immediately after each instance of initiating a conversation, the future frequency of initiations will increase. But how does the therapist know what will function as a positive reinforcer, particularly for clients diagnosed with autism or other developmental or intellectual disabilities who have intensive needs and limited language?

    There are several ways. First, a therapist could deliver an item immediately after instances of a certain behavior and record whether the frequency of that behavior increases. This is the most direct way to identify a stimulus as a reinforcer, but it tends to be time consuming in practice. Second, a therapist could ask a client or their caregivers what functions as a reinforcer and/or observe what the client interacts with, but these methods are often unreliable. Third, a therapist can offer items to a client and observe which items they select and engage with or consume. Behavior analysts often use this third method, termed a preference assessment. See Fig. 1 for a schematic of different types of preference assessment.

    Fig. 1 Preference assessment decision chart ( https://www.appliedbehavioranalysis.com/preference-assessments/ ).

    The second method—asking caregivers and observing the client—is often used first to record a list of potential reinforcers that are then tested in preference assessments. Recording how often a client selects or engages with each item relative to the others allows items to be classified as low, moderate, or high preference; this ranking is referred to as a preference hierarchy. Preference hierarchies may be used to isolate high-preference items for intensive teaching or independent responses, while moderate-preference items are used for solitary play and prompted responses. Preference assessment results are often reported as bar graphs, with the items on the x-axis and the percentage of trials selected on the y-axis (see Fig. 2). In terms of predictive validity, research has shown that items selected in a preference assessment usually function as reinforcers (Curiel, Curiel, & Poling, 2021; Hagopian, Rush, Lewin, & Long, 2001; Kang et al., 2013; Kodak, Fisher, Kelley, & Kisamore, 2009; Lanner, Nichols, Field, Hanson, & Zane, 2010; Piazza, Fisher, Hagopian, Bowman, & Toole, 1996).

    Fig. 2 Graph of a preference assessment.
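
    The percentage-of-trials summary plotted in graphs such as Fig. 2 can be computed mechanically. Below is a minimal sketch, in Python, of tallying how often each item was selected out of the trials on which it was offered; the trial records and item names are hypothetical, and any cutoff for labeling items high, moderate, or low preference would be a clinical judgment rather than a value from this chapter.

```python
# Minimal sketch: summarize trial-based preference assessment data as the
# percentage of trials on which each item was selected (the values typically
# plotted on the y-axis of a preference hierarchy bar graph).
# The trial records below are illustrative assumptions, not data from the chapter.

from collections import Counter

def selection_percentages(trials):
    """trials: list of (items_offered, item_selected) records."""
    offered = Counter()
    selected = Counter()
    for available, choice in trials:
        for item in available:
            offered[item] += 1
        if choice is not None:
            selected[choice] += 1
    return {item: 100.0 * selected[item] / offered[item] for item in offered}

# Hypothetical records from a paired-stimulus session
trials = [
    (("truck", "blocks"), "truck"),
    (("blocks", "puzzle"), "blocks"),
    (("truck", "puzzle"), "truck"),
    (("puzzle", "blocks"), "blocks"),
]

hierarchy = sorted(selection_percentages(trials).items(),
                   key=lambda kv: kv[1], reverse=True)
for item, pct in hierarchy:
    print(f"{item}: selected on {pct:.0f}% of trials offered")
```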

    Another use of preference assessments is measuring social validity. When clients cannot verbally express preferences for interventions, therapists, work sites, or living arrangements, they may be given choices of these items and situations. This is a more objective and reliable method of assessing social validity (observing selection behaviors) compared to responding to questionnaires and interviews (verbal report). Providing choices is particularly important when helping clients plan for the transition from school to adult life to ensure they are involved in selecting vocational tasks, leisure items, social situations, and living arrangements. In this chapter, we describe using preference assessments to identify reinforcers and measure social validity, as well as other applications.

    Preference assessments for identifying reinforcers

    Major types of preference assessment

    Six types of preference assessments have been used to identify preferred stimuli, described as (a) single stimulus, (b) paired stimulus, (c) multiple stimuli with replacement, (d) multiple stimuli without replacement, (e) free operant, and (f) response restriction. Many of these assessments are conducted in a trial-based format. Prior to implementing these assessments, a therapist gathers a pool of 5 to 8 items derived from interviews with caregivers and observations of the client. These items might be foods, drinks, toys, or any items that appear to function as reinforcers. A therapist may ask a parent to complete the Reinforcer Assessment for Individuals with Severe Disabilities (RAISD; Fisher, Piazza, Bowman, & Amari, 1996) or a questionnaire asking about potential reinforcers across several senses (e.g., taste, touch; Fig. 3). We describe the unique characteristics of each type of preference assessment and considerations for conducting them.

    Fig. 3 The Reinforcer Assessment for Individuals with Severe Disabilities (RAISD).

    To implement the single stimulus preference assessment (SS; Hagopian et al., 2001; Pace, Ivancic, Edwards, Iwata, & Page, 1985), the therapist presents one item at a time in a trial-based format and observes the client's response, which may be reaching for or looking at the item. The therapist rotates through a variety of stimuli, presenting each one several times and allowing the client to briefly (e.g., 30 s) consume or engage with the item. Stimuli selected in a high proportion of opportunities or for the longest duration are considered high preference (Kodak et al., 2009). Unlike other types of preference assessment, the SS does not require the client to scan an array and choose from multiple items. However, for this reason, this method may not produce a preference hierarchy if multiple items are selected in a high proportion of opportunities; that is, all items may appear highly preferred.

    Like the SS, the paired stimulus preference assessment (PS; Fisher et al., 1992; Paclawskyj & Vollmer, 1995), also known as the forced-choice or paired-choice preference assessment, is implemented in a trial-based format with two items presented simultaneously and the instruction to pick one. The therapist presents pairs of items in all possible combinations, and each pairing may be assessed multiple times, resulting in many trials. For example, a PS with 8 items, each paired with every other item in both positions (right and left), results in 56 trials. Preference is determined by calculating the proportion of opportunities an item was selected when it was available. See Fig. 4 for a sample PS data sheet. Because the client must choose between two items on each trial, the PS results in a preference hierarchy. However, given the repeated testing of items paired with all other items, the PS often takes longer to implement than the SS and other methods.

    Fig. 4 Sample data sheet for the paired stimulus preference assessment (PS) ( https://ebip.vkcsites.org/wp-content/uploads/2016/03/EBIP_Paired-Stimulus_Data-Sheet_4-Items.pdf ).
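
    To make the trial count concrete, the following minimal Python sketch generates a PS trial order in which every item appears with every other item in both the left and right positions. The 8 item names are hypothetical, and the randomized presentation order is an assumption for illustration.

```python
# Minimal sketch of generating a paired-stimulus (PS) trial order: every item
# is paired with every other item, appearing once on the left and once on the
# right, which for 8 items yields 8 * 7 = 56 trials as described above.
# The item names and the shuffling step are illustrative assumptions.

import itertools
import random

items = ["truck", "blocks", "puzzle", "bubbles",
         "book", "crayons", "ball", "music"]

trials = list(itertools.permutations(items, 2))       # ordered (left, right) pairs
assert len(trials) == len(items) * (len(items) - 1)   # 56 trials for 8 items

random.shuffle(trials)  # present pairings in a randomized order
for left, right in trials[:5]:
    print(f"Left: {left:8s}  Right: {right}")
```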

    Two preference assessment strategies involve presenting several (e.g., 5–8) items in an array each trial. To implement the multiple stimuli with replacement preference assessment (MS; Keen & Pennell, 2010; Windsor, Piché, & Locke, 1994), the therapist arranges items in a line in front of the client. With a large array, it is important to point to all items and ensure the client looks at them. After the client selects an item, consumes it, or briefly (e.g., 30 s) engages with it, the therapist places the item back in the array, rearranges the order of items, and begins a new trial. On each subsequent trial, the client may choose from all the original items. Like the PS, the MS provides a measure of relative preference, but the MS requires fewer trials. However, because each item is available every trial, the client may choose only the most preferred item(s), resulting in the incorrect assumption that the other items do not function as reinforcers when they might (i.e., false negatives). For example, in an MS with cookies, chips, and candy, if the client chooses candy in every trial, one might incorrectly conclude that cookies and chips are not reinforcers.

    To overcome this limitation, DeLeon and Iwata (1996) suggested not replacing each chosen item in the next trial, a method called the multiple stimulus without replacement preference assessment (MSWO; DeLeon & Iwata, 1996; Richman, Barnard-Brak, Abby, & Grubb, 2016). To implement the MSWO, the therapist presents all items, and the client selects one. After the client consumes or briefly engages with the item, the therapist leaves that item out of the subsequent trials and rearranges the order of the remaining items. This process continues until there are no items remaining or no items are chosen (see Fig. 5). A common way to score an MSWO is to assign points to the item selected each trial (Ciccone, Graff, & Ahearn, 2005). For example, when assessing five items, the first item chosen receives a score of five points; the second item chosen receives a score of four points; and so on. After implementing the MSWO with five items five times, points for each item are summed to determine the preference hierarchy. The MS and MSWO are efficient methods of assessing preference, though the MSWO is more likely to identify multiple preferred items. Unlike the SS and PS, the MS and MSWO require a client to scan and choose from an array.

    Fig. 5 Sample data sheet for the multiple stimulus without replacement preference assessment (MSWO) ( https://ebip.vkcsites.org/wp-content/uploads/2016/03/EBIP_MSWO_Data-Sheet_5-items.pdf ).
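
    The point-scoring scheme just described translates directly into a short computation. The minimal Python sketch below sums points across five hypothetical MSWO administrations of a five-item array (first pick earns five points, second pick four, and so on); the selection orders and item names are invented for illustration.

```python
# Minimal sketch of the MSWO point-scoring approach described above: within
# each administration, the first item chosen earns points equal to the array
# size, the second earns one less, and so on; points are summed across
# administrations to build the preference hierarchy.

from collections import defaultdict

def mswo_scores(administrations, array_size):
    """administrations: list of selection orders, each a list of item names."""
    totals = defaultdict(int)
    for order in administrations:
        for rank, item in enumerate(order):
            totals[item] += array_size - rank  # first pick = array_size points
    return dict(totals)

# Hypothetical selection orders from five administrations
administrations = [
    ["candy", "chips", "cookie", "juice", "pretzel"],
    ["chips", "candy", "juice", "cookie", "pretzel"],
    ["candy", "chips", "cookie", "pretzel", "juice"],
    ["candy", "juice", "chips", "cookie", "pretzel"],
    ["chips", "candy", "cookie", "juice", "pretzel"],
]

for item, points in sorted(mswo_scores(administrations, 5).items(),
                           key=lambda kv: kv[1], reverse=True):
    print(f"{item}: {points} points")
```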

    A preference assessment method that does not use trials is the free operant preference assessment (FO; Clay, Schmitz, Clohisy, Haider, & Kahng, 2021; Roane, Vollmer, Ringdahl, & Marcus, 1998). To implement an FO, the therapist puts all items on a table or in a play or leisure area. During a brief session (e.g., 5 min), the therapist records the duration the client engages with each item. Items engaged with for longer durations are considered most preferred (see Fig. 6). The FO is efficient if sessions are short, and the FO may produce a preference hierarchy as items are concurrently available and the client must choose between them. However, as with the MS, the client may interact exclusively with the most preferred item(s), which may result in false negatives. The FO is particularly useful for assessing long-duration activities such as video games (Kodak et al., 2009). Attention-maintained problem behavior may occur during FOs as attention is withheld. However, compared to the trial-based methods, the FO is less likely to evoke problem behavior as there are no demands to choose an item and no removal of reinforcers (Tung, Donaldson, & Kahng, 2017; Verriden & Roscoe, 2016).

    Fig. 6 Sample data sheet for the free operant preference assessment (FO) ( https://ebip.vkcsites.org/wp-content/uploads/2016/03/EBIP_Free-Operant_Data-Sheet.pdf ).
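
    Scoring an FO reduces to totaling engagement time per item. The minimal Python sketch below sums the seconds of engagement recorded for each item during a single observation and ranks items by duration; the engagement log is a hypothetical example.

```python
# Minimal sketch of summarizing a free operant (FO) session: total the seconds
# the client engaged with each item during the observation and rank items by
# duration. The engagement log below is hypothetical.

from collections import defaultdict

def fo_durations(engagement_log):
    """engagement_log: list of (item, start_s, end_s) intervals."""
    totals = defaultdict(float)
    for item, start, end in engagement_log:
        totals[item] += end - start
    return dict(totals)

log = [
    ("tablet", 0, 95),
    ("blocks", 95, 130),
    ("tablet", 130, 240),
    ("book", 240, 255),
    ("tablet", 255, 300),
]

for item, seconds in sorted(fo_durations(log).items(),
                            key=lambda kv: kv[1], reverse=True):
    print(f"{item}: {seconds:.0f} s of engagement")
```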

    The final type of preference assessment is the response restriction preference assessment (RR; Boyle et al., 2019; Hanley, Iwata, Lindberg, & Conners, 2003), which combines elements of the MSWO and FO. On each trial, which lasts 3 to 5 min, the therapist places several items in front of the client and tells them to play with whatever they like. Similar to the FO, items are not removed during the trial and the therapist records the duration of engagement with each item. Then, like the MSWO, the therapist removes the item that was engaged with the most and re-presents the remaining items for the next trial. Boyle et al. found that the RR was more likely than the FO to produce a preference hierarchy, but the RR took more time.
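
    The RR logic (observe engagement, remove the top item, repeat) can be expressed as a short loop. In the minimal Python sketch below, run_trial is a hypothetical stand-in for an observed 3- to 5-min trial and returns randomized engagement durations purely for illustration.

```python
# Minimal sketch of the response restriction (RR) logic described above:
# after each timed trial, the item with the greatest engagement duration is
# removed before re-presenting the remaining items.

import random

def run_trial(items):
    # Placeholder for a real 3-5 min observation; returns seconds of
    # engagement per item (randomized here purely for illustration).
    return {item: random.uniform(0, 180) for item in items}

def response_restriction(items):
    remaining = list(items)
    removal_order = []  # items in order of removal = preference hierarchy
    while remaining:
        durations = run_trial(remaining)
        top = max(durations, key=durations.get)
        removal_order.append(top)
        remaining.remove(top)
    return removal_order

print(response_restriction(["tablet", "blocks", "book", "puzzle"]))
```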

    Choosing a type of preference assessment

    Research has shown that all six types of preference assessment are effective in identifying positive reinforcers and each type has pros and cons (see Fig. 7 for a summary). Therefore, it may be challenging to choose a format because there are no published recommendations for matching assessment methods to types of clients. One rule of thumb is that the MSWO and PS are the most reliable methods although the PS takes more time (Kang et al., 2013). Some researchers have offered decision-making models for choosing a type of preference assessment. For example, Karsten et al. (2011) suggested starting with an MSWO and progressing through alternate preference assessment types based on certain outcomes (see Fig. 8). Similarly, if a client engages in problem behavior, the therapist may switch to an FO, or if there is a position bias, the therapist may use an SS or present items closer together in a small container.

    Fig. 7 Assets and potential barriers of the major types of preference assessment ( Karsten, Carr, & Lepper, 2011, p. 350 ).

    Fig. 8 Decision-making model ( Karsten et al., 2011, p. 354 ).
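
    As a rough illustration of how such decision rules can be operationalized, the toy Python helper below encodes only the simplified heuristics mentioned in the preceding paragraph (start with an MSWO; move to an FO if problem behavior occurs; consider an SS or closer item placement if a position bias appears). It is not a reimplementation of the published decision-making models, and the flag names are assumptions.

```python
# Toy decision helper encoding only the simplified heuristics described in the
# surrounding text, not the published Karsten et al. (2011) model.

def suggest_assessment(problem_behavior: bool = False,
                       position_bias: bool = False) -> str:
    if problem_behavior:
        return "FO (no demands to choose, no removal of reinforcers)"
    if position_bias:
        return "SS, or re-run with items presented closer together"
    return "MSWO (default starting point)"

print(suggest_assessment())
print(suggest_assessment(problem_behavior=True))
print(suggest_assessment(position_bias=True))
```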

    Virués-Ortega et al.’s (2014) decision-making model assists therapists in selecting a preference assessment by asking a series of questions about prerequisite skills, time constraints, problem behavior, preference hierarchy, and long-duration reinforcers (see Fig. 9). Most recently, the model by Lill, Shriver, and Allen (2021) guides the therapist to arrive at multiple assessment format options (see Fig. 10), along with the agreement among the results of those options and their relative efficiency. This model also involves preassessment considerations such as item selection (e.g., same class, equal portion) and motivational variables (i.e., restricting access to items 15 min prior to assessment). Given the absence of conclusive experimental evidence classifying assessments in relation to participant characteristics, these decision-making models may assist in individualizing assessments for clients.

    Fig. 9 Decision-making model ( Virués-Ortega et al., 2014 ).

    Fig. 10 Decision-making model ( Lill et al., 2021, p. 1147 ).

    Deciding how often to conduct preference assessments

    After selecting a type of preference assessment, a critical consideration is that preference is usually not stable. In other words, what functions as a reinforcer at one moment may not function as a reinforcer the next moment. This outcome occurs because reinforcers change in effectiveness based on motivating operations (MO), defined as antecedent events or conditions that (a) temporarily increase (establishing operation; EO) or decrease (abolishing operation; AO) the value of a stimulus as a reinforcer (the value-altering effect) and (b) increase (EO) or decrease (AO) the likelihood of engaging in behavior that has produced that stimulus in the past (the behavior-altering effect; Michael & Miguel, 2020). For example, if a child has not eaten for a while, there may be an EO for food; if a child wants to color and is given paper but no marker, there may be an EO for a marker. A therapist might offer a Mr. Potato Head toy with each piece functioning as a reinforcer for inserting it, but when another child starts playing with a marble run game, the EO might shift from the Mr. Potato Head toy to the marble run game. A child may have an EO for salty popcorn, and after consuming 20 pieces might have an AO for popcorn and an EO for juice.

    The preceding discussion suggests that if a therapist conducts a preference assessment on Monday at 9:00 am and the most-selected item is a toy truck, there was an EO for the truck at 9:00 am but the EO may be gone at 10:00 am (i.e., an AO for the truck). Gottschalk, Libby, and Graff (2000) demonstrated the effects of MOs on preference assessments among young clients with developmental disabilities, first with a PS to identify four moderately preferred edibles. In the EO condition, the clients had no access to the items for 24 or 48 h. In the AO condition, the clients had access to the items during the 24 h prior to the session as well as 10 min prior to the session. The clients approached the items in the EO condition more often than in the AO condition.

    Chappell, Graff, Libby, and Ahearn (2009) extended Gottschalk et al. (2000) with three conditions: access immediately before the session, 10 min of deprivation, and 20 min of deprivation. Two of three participants showed higher preferences for items in the 20-min deprivation condition compared to the immediate-access and 10-min deprivation conditions. These results highlight why MOs need to be taken into account when conducting preference assessments. Because preference assessment results are often not static, the relevant question for therapists should be: What items are preferred or not preferred under the current MO conditions, that is, right now? Additionally, results of preference assessments conducted months apart are unlikely to be consistent (MacNaul, Cividini, Wilson, & Di Paola, 2021). A way to frequently assess EOs for potential reinforcers is to conduct mini-preference assessments prior to teaching sessions, which may be structured (e.g., PS, MSWO) or unstructured (e.g., FO). If the same items are offered for many weeks, the child may lose interest in those items (i.e., an AO); thus, it is recommended to increase the range of potential reinforcers and to continually identify new ones (Barbera, 2007).

    Conducting preference assessments efficiently

    Given the intensive needs of most clients receiving ABA services, as well as the need to conduct preference assessments regularly due to changing MOs, procedures must be implemented efficiently. Researchers have provided two types of adaptations for conducting preference assessments quickly while retaining the predictive validity of selected items as reinforcers. First, the number of stimulus presentations or the assessment duration can be reduced. For example, Clay et al. (2021) reduced FO sessions from 5 min to 1 min. Similarly, whereas the common method of implementing an MSWO is presenting the entire array five times, a more efficient method is presenting the entire array only once (Richman et al., 2016; Tullis, Cannella-Malone, & Fleming, 2012). Presenting an MSWO array once or twice, compared to three times, may yield the same hierarchy but not the same high-preference item (Conine et al., 2021), though this is not a concern if the high-preference item functions as a reinforcer.

    Second, the types of items presented may be altered, particularly in settings where certain items or activities are not readily available for sampling or are cumbersome to deliver repeatedly, such as time with a preferred teacher or playing basketball. A solution to this challenge is to leverage representational forms of stimuli such as pictures (Graff & Gibson, 2003) or video clips (Curiel, Curiel, Adame, & Li, 2020; Snyder, Higbee, & Dayton, 2012), perhaps displayed on a computer or tablet (Brodhead et al., 2016). Curiel et al. offered a digital tool that may be helpful in conducting preference assessments with video stimuli (https://mswopat.utrgv.edu). When using representational stimuli, it is important to ensure that clients can match items to pictures or videos (Clevenger & Graff, 2005) or can be taught this response. Using pictures to assess preference may be particularly efficient if the client selects a picture but does not then engage with that reinforcer (Brodhead, Kim, & Rispoli, 2019; Groskreutz & Graff, 2009), though this arrangement may evoke problem behavior (Davis et al., 2010; Kang et al., 2011).

    Preference assessments for nontangible items

    Another role of pictures and videos is to assess social and other nontangible items (Wolfe, Kunnavatana, & Shoemaker, 2018). Clay, Samaha, Bloom, Bogoev, and Boyle (2013) used a PS to offer choices of therapists who provided unique types of social interaction (e.g., tickles, head rubs, high fives). Morris and Vollmer (2019) developed the social interaction preference assessment (SIPA), which combines aspects of the MSWO and RR. For five trials each session, the therapist presents pictures of types of social interaction, and if an item is selected on at least 80% of trials across two sessions, it is removed for the following sessions.

    Morris and Vollmer (2020a, 2020b) further evaluated the validity of the SIPA and how it compared to the MSWO and vocal PS. They found that compared to low preference social interactions, high preference interactions identified by the SIPA were most effective as reinforcers during teaching sessions. When comparing the SIPA to the MSWO and vocal PS (e.g., do you want X or Y?), the MSWO and SIPA produced valid outcomes for all participants, and the vocal PS produced valid outcomes for the verbal participants. The MSWO was more efficient than the SIPA, and the SIPA produced more valid results for clients with limited matching and tacting repertoires.

    Other examples of nontangible reinforcers are sounds (e.g., music) and smells (e.g., perfumes), which may be assessed using a PS. For example, Horrocks and Higbee (2008) presented two portable CD players that each played a different song. After briefly sampling each song, the client chose a song by pointing to one of the CD players. Wilder et al. (2008) held two air fresheners (one at a time) up to the client's nose for several seconds and then asked them to choose one. Saunders and Saunders (2011) conducted preference assessments with nonambulatory adults with profound intellectual and sensory disabilities who activated adaptive switches to access auditory (e.g., music), tactile (e.g., vibration), visual (e.g., strobe light), and olfactory (e.g., diffuser) stimulation. Clients with severe, multiple disabilities may also use eye gaze to indicate preference (Cannella, Sabielny, & Tullis, 2015).

    Accounting for cultural differences

    Behavior analysts must consider how the cultural and linguistic background of each client may impact preference for potential reinforcers (Fong, Catagnus, Brodhead, Quigley, & Field, 2016). Given projections that the foreign-born population will grow from 13% (in 2016) to 19% by 2060 (United States Census Bureau, 2015), behavior analysts are serving more racially, culturally, and linguistically diverse clients. In ABA, cultural responsiveness and cultural humility must be strongly woven into the work of serving clients (BACB Ethics Code, 2020, Code 1.07) and training behavior analysts (Code 4.07).

    Food items are often used as reinforcers in behavioral programming, so there are several considerations when suggesting or including food items in preference assessments. For example, Italian families may prefer dairy-based snacks (e.g., cheese) while Asian families may favor wheat-based cuisine (e.g., noodles, dumplings). Additionally, when working with families with incomes at or below the poverty line, it is important to exercise caution in suggesting food items that may be prohibitively expensive (Wright, 2019), particularly where food insecurity (i.e., lacking consistent access to enough food for active and healthy living; Tucker, Davis, Perez, Klein, & D'Amico, 2022, p. 737) is a concern (Beaulieu & Jimenez, 2022; Dennison et al., 2019).

    Resetar Volz and Cook (2009) recommended that schools and organizations serving verbal individuals, such as adolescents diagnosed with emotional disturbance, conduct surveys to assess preferences. They analyzed the results of 313 survey respondents at a residential school for children and adults and reported: "For the item outing to fast food, African American youth rated it as significantly more preferred than both Caucasian and Other youth. Results revealed that Caucasian youth, on the other hand, were significantly more likely to rate outing to nice restaurant, playing video games, and playing outdoors as a preferred activity than African American and Other youth" (p. 787). Though these results were correlational with a weak-to-moderate effect size (0.2) and there are likely other contributing factors, a survey approach may be indicated for determining preferred items among diverse individuals residing in large groups.

    Cultural and linguistic backgrounds affect more than the selection of food items. In a study in Italy with 16 children with autism, Slanzi, Graziano, D'Angelo, Vollmer, and Conine (2020) found that screen-based technology devices (e.g., iPads) were selected in lower percentages than in a similar study conducted in the US. Slanzi et al. speculated that this was because, in contrast to programs in the US, the Italian children did not use screen-based devices as alternative communication devices and there was only one device in each child's classroom. Slanzi et al. also suggested that because Italian mothers are "more affectionate than their American counterparts" (p. 2437), the social interactions that came with playing with toys may have been more preferred than isolated play with a device. These studies underscore the need to identify potential reinforcers that align with cultural differences.

    The best way of exhibiting cultural humility (Kirby, Spencer, & Spiker, 2022) is including parents in the selection of potential reinforcers (Čolić, Araiba, Lovelace, & Dababnah, 2021; Deochand & Costello, 2022). To do this, behavior analysts may use open-ended interviews (Hanley, 2012) or culturally sensitive assessment tools (Moreno, Wong-Lo, & Bullock, 2014) to garner information from families regarding how their cultural background and preferences affect the selection of potential reinforcers for their child. As therapists become more sensitive to their clients’ needs, levels of mutual trust between therapists and parents will increase (Berlin & Fowkes, 1983). Although therapists do not need to speak their clients’ languages, familiarizing themselves with relevant cultural contexts, family virtues, and religious preferences will help establish rapport (Castillo, Quintana, & Zamarripa, 2000; Martinez & Mahoney, 2022). If language barriers arise, therapists should consider collaborating with certified interpreters who share cultural and linguistic backgrounds with the client (Dowdy, Obidimalor, Tincani, & Travers, 2021). Finally, when working with clients living in non-English-speaking homes, therapists may use preference assessments to allow them to select their preferred language for instruction (Aguilar, Chan, White, & Fragale, 2017).

    In summary, we described six types of preference assessment and provided research-based considerations for selecting a type, accounting for MOs, conducting assessments efficiently, assessing nontangible reinforcers, and exercising cultural humility when selecting potential reinforcers. We now explain how preference assessments can be used to assess social validity and preferred environments.

    Preference as social validity

    Social validity refers to the extent to which the goals, procedures, and outcomes of an intervention program are acceptable to direct and indirect consumers (Wolf, 1978). Rather than asking people what procedures they prefer, behavior analysts can arrange multiple procedures in a preference assessment. If this is not possible, an alternative assessment of preference and social validity is the extent to which the client displays happiness. These types of assessments of social validity are particularly important when planning a client's transition from school to adult life.

    Assessing preference for interventions

    Preference assessments may be used to allow clients to choose the interventions they receive. With verbal clients who can answer questions about their preferences, a straightforward approach is to use dialogue or questionnaires. Nonverbal clients must also have ways to express their opinions and be empowered with self-determination (Wehmeyer, 2020). Using preference assessments in this way will equip more stakeholders to improve clients' quality of life (Schwartz & Kelly, 2021) and self-advocacy skills.

    Hanley (2010) used preference assessments as "an objective measurement of social validity" (p. 13) to allow clients to choose their preferred interventions. For example, consider a client with problem behavior reinforced by attention. Two potential interventions are functional communication training (FCT) and noncontingent reinforcement (NCR). A behavior analyst could test both interventions with the following addition: when the behavior analyst uses FCT, they have the client touch a red card, and when using NCR, the client touches a blue card. Then, after many sessions indicating that both interventions are effective, the behavior analyst offers the client a choice of interventions by holding up the red and blue cards and allowing the client to choose one.

    The concurrent operants arrangement presented by Hanley (2010) has been used to allow clients to choose among many interventions, including forward versus backward chaining (Slocum & Tiger, 2011), interdependent versus independent group contingencies (Groves & Austin, 2017), and book- versus tablet-based picture activity schedules (Giles & Markham, 2017). Clients may also choose from videos displaying interventions (Huntington & Schwartz, 2021). In addition to the many benefits addressed above, letting clients and families choose their interventions is a way to practice cultural humility and compassionate care (Taylor, LeBlanc, & Nosik, 2019). A behavior analyst should be aware of how their own culture and personal biases influence intervention selection and should instead ensure that interventions align with the client's culture (Slim & Celiberti, 2022). Engaging in two-way communication may avoid erroneous assumptions (Kalyanpur & Harry, 2012), and providing choices of interventions sends the message that the behavior analyst is adopting the client's cultural perspective and continually seeking input.

    Indices of happiness

    When it is difficult to allow clients to choose interventions, large-scale environments, and living arrangements, behavior analysts may use more descriptive methods to assess preference, such as measuring indices of happiness (Parsons, Reid, Bentley, Inman, & Lattimore, 2012; Tullis & Seaman-Tullis, 2019). When environmental arrangements are highly preferred, people usually respond in a way that would be termed happy. Alternatively, when less preferred conditions are present, people often respond in a neutral or unhappy manner.

    Parsons et al. (2012) outlined a systematic approach for defining happiness and unhappiness with non- or minimally verbal adult clients. First, they asked caregivers to indicate the indices (i.e., topographies) of happiness and unhappiness for each client. Examples of individually defined happiness were laughing, smiling, patting leg, and running; examples of individually defined unhappiness were hitting head, crying, pressing finger on eye, biting hand, frowning, and tipping over furniture. The caregivers also indicated situations in which the clients were happy (e.g., drawing, the lounge, leisure time) and unhappy (e.g., no activity, reading). Second, the researchers verified that indices of happiness occurred in the happy situations, and vice versa. Finally, the researchers used a PS to allow the clients to choose the happy or unhappy situations and they all chose the happy situations. Although this type of descriptive assessment has limitations related to precision, it may be a useful method in some contexts.

    Using preference assessments to capture social validity is a promising clinical practice, but some caution should be taken. First, it is important to incorporate additional variables needed to validate a client’s preference, such as the efficacy of an intervention. For example, if a client selects noncontingent reinforcement (NCR) when the procedure does not reduce a problem behavior, the preference assessment lacks validity. Second, when assessing social validity according to the guidelines provided by Wolf (1978), there may be conflicts between stakeholders or shifts in acceptability with changing environments. Steps should be taken to ensure the client remains at the center of the process.

    Preference assessment in transition services

    Peterson, Aljadeff-Abergel, Eldridge, VanderWeele, and Acker (2021) wrote that "An individual is considered 'self-determined' when he/she makes his/her own choices about what to eat and when, where to live and with whom, what to wear each day, what to eat each day, what to do to earn money (or to stay at home and eat bonbons), where to go to school, etc." (p. 301). As clients transition from school to adult life, it is critical to assess how acceptable new environments are to them. Transition services are a coordinated set of assessment, goal development, and skill acquisition activities that prepare secondary students with disabilities for postschool environments (Kochhar-Bryant, Bassett, & Webb, 2009). These programs prepare students across the domains of employment, social and leisure activities, and living settings. Using preference assessments during transition services can ensure the non- or minimally verbal client is fully engaged in the process and making choices with respect to these three domains (Lohrmann-O'Rourke & Gomez, 2001; Tullis & Seaman-Tullis, 2019). Transition-based preference assessments may be direct (e.g., MSWO) or descriptive (e.g., indices of happiness).

    In terms of employment, behavior analysts may assess preference for the same elements that are relevant for people without disabilities (Lent, Brown, & Hackett, 2000) such as location, break conditions, work times, and reinforcers for task completion (Ninci, Gerow, Rispoli, & Boles, 2017). Reid et al. (2007) used both an MSWO and a PS with 12 adults who had severe disabilities by presenting items corresponding to work tasks (e.g., stamps for stamping envelopes), asking the clients to choose one, having the client engage in that work task for three minutes, presenting a choice of the remaining items, and so on. Worsdell, Iwata, and Wallace (2002) determined preference of vocational tasks (e.g., folding towels) by recording the duration of engagement with each task. Additionally, behavior analysts may use indices of happiness to determine if a work location or work shift is preferred or nonpreferred. These indices could be augmented by verifying preference using metrics related to time engaged with job-related items or frequencies of breaks.

    Although research on assessing preference for social and leisure activities is limited, Call, Trosclair-Lasserre, Findley, Reavis, and Shillingsburg (2012) used a concurrent operants preference assessment to assess the functional properties of social interactions, specifically where social interaction takes place, the theme of the interaction (e.g., playing Dungeons and Dragons, attending a sporting event), and the duration of the interaction. One participant indicated that social interaction was highly preferred, with the remaining participants indicating a neutral preference. These data highlight the necessity of assessing not only social stimuli but also the client's preference for the level or nature of social interactions.

    Choice of living arrangement may be one of the least investigated areas of the transition planning process and the most difficult to assess. The UN Convention on the Rights of Persons with Disabilities (United Nations, 2006) supports the assertion that choice of living arrangement is a basic right. Preference for living arrangement may be best described in terms of quality of life (Stancliffe & Keane, 2000), which has been an understudied construct (van Heijst & Geurts, 2015). Generally, preference, or the extent to which preference is incorporated, is core to the concept of quality of life (Schwartz & Kelly, 2021). In supported or independent living settings, typical measures of preference may be less appropriate than more descriptive forms (e.g., indices of happiness). As with other aspects of transition, these measures should be augmented by other observations to confirm preference. For example, a behavior analyst could correlate a goal of being happy living with a roommate with measures of how often the client and roommate interact or are in the same room. Allowing a client to express their preference for a living environment makes a rich living experience more likely.

    In summary, we described using preference assessments to allow clients to choose interventions and how behavior analysts can use indices of happiness to identify preferred environments for clients. We discussed how to assess preference in the context of transition services, particularly employment, social and leisure activities, and living settings. In a final section, we describe additional applications of preference assessments, specifically with additional populations and training people to conduct preference assessments.

    Additional applications of preference assessments

    Preference assessments have been conducted across many populations and across the lifespan, from 13 months of age (Rush, Kurtz, Lieblein, & Chin, 2005) to 95 years (Feliciano, Steers, Elite-Marcandonatou, McLane, & Areán, 2009). General education students (Schanding Jr., Tingstrom, & Sterling-Turner, 2009) and students with or at risk for emotional disturbance (King & Kostewicz, 2014) have benefited from preference assessments. In one study, Paramore and Higbee (2005) conducted MSWOs of food items with adolescents with emotional disturbance, and those items were then used to reinforce on-task behavior. With elementary students at risk for emotional disturbance, King (2016) compared MSWOs with verbally indicated preferences and found that for one student, the item selected most often in the MSWO was superior in terms of increasing the completion of academic tasks. With adults diagnosed with schizophrenia, Wilder, Ellsworth, White, and Schock (2003) and Wilder, Wilson, Ellsworth, and Heering (2003) found that PSs and verbal indications of preference yielded similar results. Reyes, Vollmer, and Hall (2017) conducted preference assessments with sex offenders with intellectual disability as part of a broader assessment of the likelihood of reoffending. Finally, Raetz, LeBlanc, Baker, and Hilton (2013) found that results of MSWOs were stable in 5 out of 7 adults with dementia.

    Preference assessments have also been conducted in the context of organizational behavior management (OBM; e.g., Wine, Reis, & Hantula, 2014). Simonian, Brand, Mason, Heinicke, and Luoma (2020) reviewed 12 studies that evaluated preference assessments in a variety of organizations and identified money (up to $10), gift cards, snacks, breaks, choice of work tasks, office supplies, and praise/recognition as potential reinforcers. About half of the studies used a PS or MSWO format, and the other studies used surveys or other indirect methods.

    Training people to conduct preference assessments

    In the last decade, over 15 studies have been published on methods to teach staff, teachers, and parents to conduct preference assessments. The most common procedure is behavioral skills training (BST), consisting of instructions, modeling, role play, and feedback (Lavie & Sturmey, 2002; O’Handley, Pearson, Taylor, & Congdon, 2021). Other effective procedures are the feedback sandwich (positive-constructive-positive; Bottini & Gillis, 2021a) and video modeling, often with written or voice-over instructions (Delli Bovi, Vladescu, DeBar, Carroll, & Sarokoff, 2017; Rosales, Gongola, & Homlitas, 2015; Vladescu et al., 2021). In addition, researchers have validated online training (Bottini & Gillis, 2021b), self-instruction (Shapiro, Kazemi, Pogosjana, Rios, & Mendoza, 2016; Wishnowski, Yu, Pear, Chand, & Saltel, 2018), and telehealth (Higgins, Luczynski, Carroll, Fisher, & Mudford, 2017). This rich body of research indicates that once a behavior analyst chooses a type of preference assessment for a client, there are a host of available and effective procedures for training staff and others to conduct it.

    Chapter summary

    Preference assessments have several purposes and copious research support. To identify reinforcers for behavioral programming, behavior analysts may choose from six types of preference assessment: SS, PS, MS, MSWO, FO, and RR. Research generally favors the PS and MSWO, though the FO is useful for reducing the likelihood of problem behavior and assessing long-duration reinforcers. It is critical to account for MOs when conducting preference assessments as preferences and reinforcer efficacies may vary from moment to moment. There are several ways to make preference assessments more efficient by reducing trials and session duration. In addition, research supports conducting preference assessments with nontangible items such as social interactions and olfactory stimuli.

    Behavior analysts should incorporate cultural humility into selecting and testing potential reinforcers, as there are many racial, linguistic, and socioeconomic variables that may affect the types of reinforcers that are acceptable to families. Including parents and guardians in selecting potential reinforcers is essential. Continually learning about each family's culture will increase the chances of using acceptable reinforcers.
