Doordash, Inc., Grubhub Inc. v. New York City Department of Consumer and Worker Protection, Vilda Vera Mayuga, in her official capacity as Commissioner of the New York City Department of Consumer and Worker Protection (Special Proceedings - CPLR Article 78)
FILED: NEW YORK COUNTY CLERK 07/06/2023 11:59 AM | INDEX NO. 155947/2023 | NYSCEF DOC. NO. 29 | RECEIVED NYSCEF: 07/06/2023

EXHIBIT 1

Expert Report of Jonah Berger, Ph.D.

July 5, 2023

I. Background and Assignment

1. On November 16, 2022, the New York City Department of Consumer and Worker Protection ("DCWP") published a proposed rule in the City Record to implement Local Law 115 of 2021 (the "Proposed Rule").[1] According to DCWP, "Local Law 115 of 2021 charged the Department with studying [food delivery workers who work with apps as independent contractors] and developing an appropriate minimum pay rate to ensure adequate compensation for these workers."[2] Also in November 2022, DCWP published a report entitled "A Minimum Pay Rate for App-Based Restaurant Delivery Workers in NYC" that discussed, among other things, DCWP's "proposed rule to establish a minimum pay rate" for food delivery work, as well as three different surveys that DCWP conducted, including "an online survey distributed to 123,000 workers who performed deliveries in NYC in the fourth quarter of 2021" (the "NYC Delivery Worker Survey").[3]

2. DCWP states that it conducted the surveys in order to gain information about NYC delivery workers' expenses, demographics, safety conditions, work history, and experiences with discipline and non-payment on food delivery apps, as well as NYC restaurant owners' and managers' delivery experiences.[4] Information obtained from these surveys formed, in part, the basis for DCWP's Proposed Rule establishing a minimum pay rate for NYC delivery workers.[5]

3. On December 16, 2022, Dr. Itamar Simonson submitted a report (the "Simonson Report") before DCWP regarding the Proposed Rule.[6] In this report, Dr. Simonson evaluated the three surveys that DCWP conducted. Dr. Simonson opined that these surveys "were biased and were bound to produce unreliable results" because they "violated fundamental principles of survey design."[7] For instance, Dr. Simonson finds that the survey preamble explicitly told respondents that the purpose of the survey was to raise pay for app delivery workers, which in turn led respondents to exaggerate their estimates of expenses.[8] As such, Dr. Simonson determined that "it would not be consistent with accepted survey science to rely upon [the surveys'] results for use in justifying the Proposed Rule."[9]

[1] New York City Department of Consumer and Worker Protection, "Notice of Adoption of Final Rule" ("Notice of Adoption of Final Rule"), p. 1.
[2] Notice of Adoption of Final Rule, p. 1.
[3] Notice of Adoption of Final Rule, p. 22; New York City Department of Consumer and Worker Protection, "A Minimum Pay Rate for App-Based Restaurant Delivery Workers in NYC," November 2022 (the "Minimum Pay Rate Report"), p. ii.
[4] Minimum Pay Rate Report, pp. 2-3.
[5] Minimum Pay Rate Report, p. ii.
[6] Expert Report of Dr. Itamar Simonson Re: Proposed Rule on Minimum Pay for Food Delivery Workers, December 16, 2022.
[7] Simonson Report, ¶ 1.

4. DCWP subsequently issued a "Notice of Adoption of Final Rule," which included, among other things, purported responses to Dr. Simonson's commentary.[10]
DCWP also clarified that, while it relied on data from its NYC Delivery Worker Survey in developing the proposed minimum pay rate, it did not use data from the other two surveys for this purpose.[11]

5. I have been asked by counsel for DoorDash, Inc. and Grubhub, Inc. to i) review the Simonson Report and offer my opinion on the flaws pertaining to DCWP's surveys identified therein; and ii) review DCWP's responses to the commentary in the Simonson Report to determine whether the flaws identified by Dr. Simonson have been resolved.

II. Summary of Opinions

6. I agree with the Simonson Report that DCWP's surveys suffer from numerous flaws that run counter to standard practices for designing and conducting surveys. Accordingly, DCWP's surveys are biased and unreliable. Among others, these flaws include (1) relying on an inappropriate and leading initial prompt, which should encourage demand effects; (2) multiple problems with question construction, including the use of leading/biased questions, focalism bias, and inappropriate use of closed-ended, rather than open-ended, questions; (3) failing to include control (or "phantom") questions, or any other controls to account for guessing and demand effects; and (4) failing to validate survey responses.

7. DCWP's responses to Prof. Simonson's assertions do not resolve the flaws in the NYC Delivery Worker Survey, and, as a result, the survey remains unreliable. First, DCWP's response does not address the fact that, even if customary in other DCWP surveys, the prompt that appears at the beginning of the NYC Delivery Worker Survey invalidated the survey results in at least two ways: it biased the responses of those workers who participated ("Answer Bias"), and it biased the survey results as a whole by influencing who took the survey in the first place ("Participation Bias"). These biases threaten the survey's representativeness and make it invalid. Second, DCWP's finding that respondents had difficulty providing certain open-ended answers does not support using leading and inappropriate closed-ended answers, but instead suggests that respondents may have guessed and biased their responses. Third, DCWP provided no evidence for its claims that control questions and other validation techniques were not necessary, even when such practices are standard in survey design. Finally, DCWP failed to address that it did not attempt to verify certain responses to questions, for example, by asking for receipts to prove claimed purchases.

[8] Simonson Report, ¶¶ 31-33.
[9] Simonson Report, ¶ 4.
[10] Notice of Adoption of Final Rule, pp. 22-25.
[11] Notice of Adoption of Final Rule, p. 22.

III. Qualifications

8. I am a Marketing professor at the Wharton School at the University of Pennsylvania. I received my Ph.D. in Marketing from the Graduate School of Business at Stanford University and my B.A. in Human Judgment and Decision Making, also from Stanford University.

9. My research focuses on analyzing consumer behavior, word of mouth, and social influence through the use of scientific techniques such as surveys and natural language processing. I have published over 70 articles in top-tier academic journals in marketing, consumer psychology, and other disciplines, including in the Journal of Marketing, Journal of Marketing Research, Journal of Consumer Research, and Journal of Consumer Psychology.
I have also written three best-selling books that have been printed in over 30 different languages. In addition, I serve as an Associate Editor at the Journal of Marketing. In my academic and other work I have conducted or evaluated over a thousand surveys.

10. I have received numerous awards for my research and writing. In 2014, I received the American Marketing Association's Leonard L. Berry Marketing Book Award, which is given for a book that has had a significant impact in marketing and related sub-fields. In 2017, I received the Journal of Marketing's William F. O'Dell Award for the article that made the most significant, long-term contribution to marketing theory, methodology, and/or practice. In 2023, I received the Sage 10-Year Impact Award for one of the three most impactful articles published in all Sage journals. I was recognized from 2008 to 2023 by the American Marketing Association as one of the most productive researchers in marketing. The American Management Association named me one of the top 30 leaders in business, and Fast Company magazine named me one of the most creative people in business.

11. In addition to my research, I have extensive experience teaching and consulting on these topics. I have won numerous teaching awards from the Wharton School and have consulted with hundreds of companies, including many Fortune 500 companies and foundations.

12. A complete list of my publications and other academic and professional experience is in my curriculum vitae, which is attached to this report as Appendix A.

IV. The Simonson Report Identified Numerous Flaws of DCWP's Surveys that Render the Surveys Unreliable

13. The Simonson Report concludes that DCWP's surveys "violated fundamental principles of survey design" and identifies a number of flaws in the surveys.[12] I agree with the Simonson Report that the surveys suffer from numerous flaws that run counter to standard practices for designing and conducting surveys. Accordingly, DCWP's surveys are biased and unreliable. I briefly summarize the critical flaws identified in the Simonson Report in the following paragraphs.

14. Flaw #1: The Surveys did not avoid "Demand Effects," which are sometimes referred to as "Demand Artifacts."[13] Demand effects refer to the phenomenon in which survey respondents form an interpretation of the survey's purpose and, based on that interpretation, respond to the questions in a manner that they believe the researcher wants, or that reflects the respondent's self-interest. Here, for example, DCWP told the survey respondents that the purpose of the survey was to raise pay for app delivery workers.[14] According to the Simonson Report, demand effects affected DCWP's surveys and "[t]his fatal flaw, by itself, makes the surveys' results unreliable."[15]

15. Flaw #2: The Surveys suffered from multiple problems with question construction, including the use of leading/biased questions, introduction of focalism bias, and inappropriate use of closed-ended, rather than open-ended, questions.[16] When designing a survey, it is important for questions to be objective, rather than to lead respondents toward a desired response.
As the Simonson Report correctly notes, leading questions can have a particularly strong biasing effect when (as in DCWP's surveys) questions pertain to a respondent's self-reporting of their behaviors or experiences.[17] For example, the Simonson Report identified the question "How many hours per week is your car in use for something other than app delivery?" as leading because "the survey's introduction encouraged respondents to indicate that their car and associated expenses are almost entirely attributed to their food delivery work."[18]

[12] Simonson Report, ¶ 1.
[13] Simonson Report, ¶ 2a.
[14] Simonson Report, ¶ 32.
[15] Simonson Report, ¶ 30.
[16] Simonson Report, ¶¶ 2b, 2d, 2e.
[17] Simonson Report, ¶ 18.

16. Further, DCWP's surveys introduce focalism because they direct respondents to consider items that respondents would not consider had those items not been singled out.[19] Focalism is understood to be a cognitive bias in which the choice of a survey respondent is influenced by a piece of information that stands out in the context of a survey in a manner that does not reflect the real world. The Simonson Report correctly notes that focalism bias exists with respect to various expense categories provided in the Delivery Worker Survey.[20] For example, the Simonson Report identified the question "When you started the job, how much did you spend on buying delivery accessories like: GPS tracker, ... etc.?" as affected by focalism because it "suggested categories of expenses that might not have applied to respondents or that were purchased for additional reasons."[21]

17. Finally, the Simonson Report identifies that DCWP's surveys inappropriately rely on closed-ended questions when open-ended questions would be possible and preferable. The use of closed-ended questions is problematic in this case because the list that is provided to survey respondents may be understood to include "correct" answers. For example, the Simonson Report identified the question "During the past 12 months, how many batteries have you bought for your moped?," followed by the options "0, 1, 2, 3, 4, 5 or more," as an inappropriate closed-ended question because the response options "suggested the expected response range."[22] I agree with the Simonson Report in this regard because closed-ended responses in this case suggest a normative answer, or the answer most respondents are likely to give. Consequently, a respondent who did not recall whether he purchased more than 1 battery, for example, might choose a response greater than 1 because the response options suggest that is what the correct answer is likely to be. The use of closed-ended questions is especially problematic when the list does not include responses such as "I don't know," "I don't remember," or "None of the above." DCWP's surveys use many such inappropriate closed-ended questions, rendering the responses unreliable.[23] A simulation illustrating this guessing mechanism appears below.

[18] Simonson Report, ¶ 38.
[19] Simonson Report, ¶ 23.
[20] Simonson Report, ¶ 38.
[21] Simonson Report, ¶ 35.
[22] Simonson Report, ¶ 36.
[23] While some survey questions allowed respondents to skip providing an answer, most questions required an answer to allow the respondent to continue. See NYC Delivery Worker Survey Questionnaire.
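To illustrate the guessing mechanism described in ¶ 17, consider the following minimal simulation. It is of my own construction and is not drawn from the Simonson Report or from DCWP's data; the share of respondents with accurate recall, the distribution of true battery purchases, and the assumption that respondents who cannot recall pick uniformly from the displayed options are all illustrative choices.

```python
# Illustrative simulation (my own construction, not DCWP's data): how a
# closed-ended scale of "0, 1, 2, 3, 4, 5 or more" with no "I don't know"
# option can inflate estimates when respondents with poor recall guess
# from the displayed range.
import random

random.seed(0)

N = 100_000
RECALL_RATE = 0.5              # assumed share of respondents who recall accurately
OPTIONS = [0, 1, 2, 3, 4, 5]   # the closed-ended scale ("5 or more" treated as 5)

# Assumed true battery purchases: mostly 0, sometimes 1, rarely 2.
true_counts = [random.choices([0, 1, 2], weights=[0.7, 0.25, 0.05])[0]
               for _ in range(N)]

reported = []
for t in true_counts:
    if random.random() < RECALL_RATE:
        reported.append(t)               # accurate recall
    else:
        # With no "don't know" option, a guesser picks from the visible
        # scale, which implicitly suggests the plausible range of answers.
        reported.append(random.choice(OPTIONS))

print(f"true mean:     {sum(true_counts) / N:.2f}")
print(f"reported mean: {sum(reported) / N:.2f}")
```

Under these assumptions, the reported mean is roughly four times the true mean even though half of the respondents answer accurately, because respondents without an accurate recollection anchor on the displayed range instead of being able to answer "I don't know."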
18. Flaw #3: DCWP's surveys failed to include control (or "phantom") questions, or any other controls to account for guessing and demand effects. The use of control questions is well accepted by survey researchers as an effective method to minimize bias in survey responses. As the Simonson Report points out, an appropriate way to control for respondents' guessing about expenses they incurred would have been to include expense categories not related to food delivery work.[24]

19. Flaw #4: DCWP's surveys failed to validate survey responses. While in some cases it may be difficult or impossible to validate survey responses, the Simonson Report particularly identifies that survey respondents could have been asked for purchase receipts as proof of claimed expenses.[25]

V. The Flaws of the NYC Delivery Worker Survey Are Not Resolved by DCWP's Responses and, Accordingly, the Survey Remains Unreliable

20. With respect to Flaw #1, DCWP responded as follows:

It would not have been appropriate to conduct a survey without informing respondents that it was being conducted by the City of New York or informing respondents how their responses would be used. Such disclosures, which are customary, do not invalidate the survey results. Had the Department not provided appropriate disclosure, it is likely that participation would have been lower and less representative.[26]

21. DCWP's response does not resolve Flaw #1 because it ignores the well-established result that revealing the sponsor and purpose of a survey can generate demand effects fatal to the survey's reliability.[27] Even if an initial disclosure is customary in other surveys conducted by DCWP (something DCWP seems to claim without offering support), being "customary" is not an indicator of reliability or scientific validity, especially when there is academic research to the contrary.[28] More importantly, DCWP did not simply inform respondents of who was conducting the survey and how their answers would be used. Rather, DCWP told respondents that the survey's ultimate goal would be to raise pay for app delivery workers.[29] This prompt, which appears at the beginning of the NYC Delivery Worker Survey, invalidated the survey results in at least two ways: it biased the survey by leading the responses of those workers who participated in the survey ("Answer Bias"), and it biased the survey by affecting the propensity of a worker to participate in the survey ("Participation Bias").

[24] Simonson Report, ¶ 39.
[25] Simonson Report, ¶ 42e.
[26] Notice of Adoption of Final Rule, p. 24.
[27] See Simonson Report, ¶ 13, citing Diamond, S. S., "Reference Guide on Survey Research," in Reference Manual on Scientific Evidence, Third Edition, Washington, D.C.: The National Academies Press, Federal Judicial Center, 2011, 359-424 ("Diamond (2011)"), p. 411.
[28] Diamond (2011), p. 411; Diamond, S. S. and Jerre B. Swann, "Trademark and Deceptive Advertising Surveys," Second Edition, American Bar Association, p. 212.

22. Answer Bias: As described in the Simonson Report, a great deal of academic research finds that self-interest can drive survey respondents to report incorrect information in their responses if they believe they stand to gain something from doing so.[30] In this case, the prompt provided at the beginning of the survey makes it clear how respondents stand to gain a higher minimum pay rate from providing higher estimates of their expenses.
At the same time, because respondents were told that their responses would be kept confidential,[31] there was a negligible risk that they could suffer negative consequences from providing an incorrect estimate. The combination of these two factors should encourage respondents to provide exaggerated estimates of the size and frequency of expenses and biases the survey.

23. Importantly, note that the answer bias caused by the initial survey prompt affected responses to the entire survey, regardless of whether a question asked for a monetary amount or some other estimate. For example, DCWP stated that it "used the survey to measure the frequency with which workers experience loss or theft of their e-bike, purchase replacement batteries or e-bike accessories, and buy and trade-in phones."[32] A respondent who stood to gain from a higher minimum pay rate would have incentives to report more frequent purchases of batteries, accessories, or phones because there would be a direct connection between the numbers he or she reported and the estimate of expenses that DCWP would derive from those responses (because DCWP told respondents as much).

24. Participation Bias: Researchers who use surveys should strive to ensure their survey samples are representative: the survey sample should accurately reflect the relevant characteristics of the target population.[33] When a survey includes responses from only a part of the selected sample, while systematically excluding another part of the sample, it is said that the survey suffers from participation or "non-response" bias.[34] When a survey suffers from participation bias, it is difficult to draw valid inferences about the population from the survey sample.[35]

[29] NYC Delivery Worker Survey Questionnaire.
[30] See Simonson Report, ¶ 15.
[31] NYC Delivery Worker Survey Questionnaire.
[32] "Notice of Public Hearing and Opportunity to Comment on Proposed Rules," New York City Department of Consumer and Worker Protection.
[33] Diamond (2011), p. 380.

25. Academic research shows that respondents who are aware that they may gain from a survey's result are more likely to participate in the survey than are respondents who are not aware of such gains.[36] In this case, the initial survey prompt tells respondents how they stand to benefit from the outcome of the survey. Consequently, delivery workers who actually had higher or more frequent expenses would have more incentive to participate in the survey because they stood to gain more from a minimum pay rate that reflects higher expenses, relative to delivery workers with lower or less frequent expenses. This is a fatal flaw because it systematically increases the propensity of certain workers to participate in the survey (i.e., those with higher expenses), while it decreases the propensity of other workers to do so (i.e., those with lower expenses). DCWP's inability to validate the representativeness of the NYC Delivery Worker Survey is a severe flaw. The simulation below illustrates this selection mechanism.
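The following minimal simulation illustrates the selection mechanism just described, and why reweighting on a variable unrelated to the reason workers select into the sample cannot repair it (a point taken up in the next paragraph). It is an illustration of my own construction, not an analysis of DCWP's data: the expense and hours distributions, the participation model, and the simplified hours-quartile weighting are all assumptions.

```python
# Illustrative simulation (my own construction): self-selection on expenses
# inflates the sample mean, and post-stratifying on a variable that is
# unrelated to the selection (here, hours worked) does not remove the bias.
import random
import statistics

random.seed(0)

N = 200_000
population = []
for _ in range(N):
    expenses = random.expovariate(1 / 50)   # assumed weekly expenses, mean $50
    hours = random.uniform(5, 60)           # assumed weekly hours, independent
    population.append((expenses, hours))

# Assumed participation model: workers with higher expenses, who stand to
# gain more from an expense-based pay rate, respond more often.
sample = [(e, h) for e, h in population
          if random.random() < min(1.0, 0.05 + e / 200)]

true_mean = statistics.mean(e for e, _ in population)
sample_mean = statistics.mean(e for e, _ in sample)

# Post-stratify on hours quartile, mimicking (in simplified form) weighting
# on a variable that carries no information about why workers selected in.
quartile_cuts = statistics.quantiles([h for _, h in population], n=4)

def quartile(h):
    return sum(h > c for c in quartile_cuts)   # 0..3

samp_counts = [0, 0, 0, 0]
for _, h in sample:
    samp_counts[quartile(h)] += 1
weights = {q: 0.25 * len(sample) / samp_counts[q] for q in range(4)}
weighted_mean = (sum(e * weights[quartile(h)] for e, h in sample)
                 / sum(weights[quartile(h)] for _, h in sample))

print(f"population mean expenses: {true_mean:.1f}")
print(f"raw sample mean:          {sample_mean:.1f}")
print(f"hours-weighted mean:      {weighted_mean:.1f}")
```

Because hours worked carries no information about the selection on expenses, the hours-weighted estimate stays close to the inflated raw sample mean rather than to the population mean. The same logic applies to any weighting variable, such as app mix, that does not capture the self-selection incentive.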
26. While DCWP claims that it "used appropriate controls to ... address non-response bias,"[37] the controls that DCWP used do not address the incentives of delivery workers to participate in the survey,[38] and thus cannot correct the biases resulting from an unrepresentative sample. Moreover, because respondents with higher or more frequent expenses are more likely to want to participate in the survey to influence a pay rate that "reflects [their] expenses and needs,"[39] the estimate of average expenses drawn from the survey's unrepresentative sample is likely to overestimate the true average expenses of delivery workers.

27. In the Notice of Adoption of Final Rule, DCWP claims that, had it not provided disclosures about the surveys' use, "it is likely that participation would have been lower and less representative."[40] However, if DCWP was concerned about low participation in the surveys, there are other, well-established methods that can increase response rates while maintaining the survey's representativeness. Examples include providing financial or nonmonetary incentives.[41] Informing respondents that they stand to benefit from answering survey questions in a specific way is not an appropriate method to address DCWP's participation concerns because it biases the survey.[42]

[34] Diamond (2011), p. 383.
[35] Diamond (2011), p. 383.
[36] Amany Saleh and Krishna Bista (2017), "Examining Factors Impacting Online Survey Response Rates in Educational Research: Perceptions of Graduate Students," Journal of Multidisciplinary Evaluation, 13-29, pp. 63-74; Charles L. Martin (1994), "The Impact of Topic Interest on Mail Survey Response Behavior," Journal of the Market Research Society, 36-4.
[37] "Notice of Public Hearing and Opportunity to Comment on Proposed Rules," New York City Department of Consumer and Worker Protection.
[38] DCWP applied post-stratification weights based on the mix of apps used by the worker (Uber Eats only, DoorDash only, Grubhub only, Relay only, and multiple) and the quartile of hours worked. See "A Minimum Pay Rate for App-Based Restaurant Delivery Workers in NYC," New York City Department of Consumer and Worker Protection Report, p. 5.
[39] NYC Delivery Worker Survey Questionnaire.
[40] Notice of Adoption of Final Rule, p. 24.
[41] Diamond (2011), p. 383.
[42] DCWP also noted that the participation rate of its survey was "several times the rate obtained by leading academic researchers." See Minimum Pay Rate Report, p. 3. Thus, a potential decrease in overall participation in exchange for obtaining a representative, unbiased sample would not necessarily have been a concern.

28. With respect to Flaw #2, DCWP responded as follows:

[T]he Department's decision to ask workers about whether they purchased specific accessories, as opposed to an open-ended question about expenses, followed from initial testing with delivery workers in which respondents had difficulty recalling the accessories they purchased without prompting. Had the Department adopted commenter's recommendation, it would have led to an under-estimate of accessory expense. [T]he Department did not use any responses in which a respondent was asked to report a monetary amount in its calculation of the minimum pay rate. [F]ield surveyors on [a separate survey] reported that delivery workers experienced a high level of difficulty completing free response format or numeric input questions, leading the Department to determine that close-ended responses were the most appropriate format for this population and essential in keeping the voluntary survey short and cognitively undemanding, which increases survey completion and representativeness.[43]

29. DCWP's response does not resolve Flaw #2 because it is likely that DCWP's use of closed-ended questions led to guessing and bias. For example, as explained by Prof. Simonson and as I described in ¶ 17 above, the closed-ended format of the question regarding the number of batteries that survey respondents had purchased for their mopeds is likely to lead to biased and exaggerated estimates of the number purchased.[44]

[43] Notice of Adoption of Final Rule, pp. 24-25.
[44] See, e.g., Simonson Report, ¶ 30.
30. Notably, DCWP's response hinges on the claim that survey respondents had difficulty answering free response format or numeric input questions. Difficulty answering certain numeric input questions (e.g., "how many cell phones did you buy last year?") is more likely caused by the inability of respondents to remember accurately than by the question format (open-ended or closed-ended). Unlike a computer hard drive, which records and stores information, human memory can be more fallible. A great deal of research shows that, rather than simply being retrieved free from error, memory judgments are in many cases constructed on the spot.[45] When asked a question like "In the past twelve months, have you bought any delivery accessories like: GPS tracker, handle bar gloves, winter gear, rain gear, rack, food bag, basket, bungee cords, helmet, lights, horn, reflective vest, water bottle, lock, phone holster, etc.?," for example, respondents cannot simply reach into memory and pull out an exact list, because such information does not exist. Consequently, they must construct a reasonable response based on available information to the best of their ability.

31. Research shows that providing closed-ended answer options can reduce accuracy because it biases respondents to select an option even when they do not have an accurate recollection.[46] Thus, rather than achieving higher-quality responses, the closed-ended format of the NYC Delivery Worker Survey incentivized guessing. This issue is more prominent in this case because, contrary to best practices of survey design,[47] the NYC Delivery Worker Survey did not allow respondents to express that they did not know or did not remember a particular answer.[48]

32. While DCWP claims that it did not use any responses in which a respondent was asked to report a monetary amount in its calculation of the minimum pay rate, many of the inputs that DCWP acknowledges it did rely on are affected by the inappropriate use of closed-ended questions. For example, the questions that DCWP relies on to calculate the "[f]requency with which workers purchase replacement batteries or phones, or trade-in phones," the "[s]hare of respondents purchasing each e-bike accessory," or the "[a]verage phone purchase and/or trade-in price"[49] inappropriately led respondents to provide an answer even if they did not remember, and suggested to respondents the expected range of their answers.

[45] Elizabeth F. Loftus and John C. Palmer (1974), "Reconstruction of Automobile Destruction: An Example of the Interaction Between Language and Memory," Journal of Verbal Learning and Verbal Behavior, 13-5, pp. 585-589; Daniel L. Schacter (2012), "Constructive Memory: Past and Future," Dialogues in Clinical Neuroscience, 14-1, pp. 8-18; Daniel L. Schacter et al. (1998), "The Cognitive Neuroscience of Constructive Memory," Annual Review of Psychology, 49, pp. 289-318.
[46] Diamond (2011), p. 393; Reja et al., "Open-ended vs. Close-ended Questions in Web Questionnaires," Developments in Applied Statistics, 2003, p. 163.
[47] Diamond (2011), pp. 389-390.
[48] NYC Delivery Worker Survey Questionnaire.
33. Moreover, as explained in ¶ 22 above, survey instructions that trigger self-interest motives can encourage responses that promote self-interest.[50] Returning to the survey question about purchases of delivery accessories: when trying to estimate a response to that question, knowing that their responses will help set a minimum pay rate should encourage respondents to provide higher estimates.

34. Finally, even if DCWP does not claim that it relied on other questions for the calculation of the minimum pay rate, the responses to one question can be affected by responses to previous questions in the same survey.[51] The Simonson Report highlights multiple examples of questions throughout the NYC Delivery Worker Survey that are affected by the inappropriate use of closed-ended questions, by focalism bias, and by leading language.[52] DCWP cannot appropriately claim, without proof, that just because it did not rely on a particular question, the presence of flawed questions in the survey did not invalidate later responses in the same survey.

[49] Notice of Adoption of Final Rule, pp. 23-24.
[50] See, e.g., Dale T. Miller (1999), "The Norm of Self-Interest," American Psychologist, 54-12, pp. 1053-1060; Terence A. Shimp et al. (1991), "A Critical Appraisal of Demand Artifacts in Consumer Research," Journal of Consumer Research, 18-3, pp. 273-283.
[51] Diamond (2011), p. 395.
[52] Simonson Report, ¶¶ 34, 36-40.

35. With respect to Flaw #3, DCWP responded as follows:

With respect to "phantom questions," respondents to the NYC Delivery Worker Survey reported purchasing some items at very low rates (e.g., anti-theft camera at 13 percent), putting a low upper bound on the frequency with which respondents may have reported purchasing an item for delivery work that they did not in fact purchase for that purpose. The use of "phantom questions" is not customary in government surveys and the Department's choice not to include them in the NYC Delivery Worker Survey does not invalidate its results.

36. DCWP seems to argue that phantom questions are not necessary because the survey provides a low upper bound for the frequency of certain purchases claimed by the survey respondents. However, DCWP provided no evidence that the results from the survey are indeed low. For example, it is difficult to validate whether a 13 percent purchase rate for anti-theft cameras is indeed a low upper bound unless one could compare this number to an external benchmark. Even if DCWP claims (without offering support) that phantom questions are not customary in government surveys, it is considered best practice for researchers to try to demonstrate the validity of their results by comparing them to external metrics, or by relying on other strategies to correct any potential biases in the results.[53] Indeed, the City of New York has relied on such strategies in other survey studies.[54] DCWP, however, offers no evidence of the external validity of the NYC Delivery Worker Survey. A sketch of what a phantom question could have provided appears below.
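The following minimal sketch, of my own construction, illustrates what a phantom question can provide. The item names and rates are hypothetical and the simple subtraction adjustment is only one possibility; the point is that a phantom item, one that no respondent should plausibly have purchased for delivery work, gives the survey an internal benchmark for over-reporting.

```python
# Illustrative sketch (my own construction): using a hypothetical phantom
# expense item to estimate over-reporting in a purchase checklist.
# All item names and rates below are assumed for illustration.

reported_rates = {
    "helmet": 0.46,
    "insulated food bag": 0.62,
    "anti-theft camera": 0.13,
    "satellite beacon (phantom)": 0.06,   # hypothetical item no one should need
}

# The phantom item's rate estimates baseline over-reporting: the share of
# respondents who tick an item they did not actually buy for delivery work.
overreport_rate = reported_rates["satellite beacon (phantom)"]

for item, rate in reported_rates.items():
    if "(phantom)" in item:
        continue
    # A simple correction: subtract the baseline over-reporting rate.
    # (Real corrections can be more sophisticated; this shows the idea.)
    adjusted = max(0.0, rate - overreport_rate)
    print(f"{item:20s} reported {rate:.0%}, adjusted {adjusted:.0%}")
```

Under these assumed numbers, a 13 percent reported rate could reflect roughly half guessing if the phantom rate were 6 percent. Without such an internal benchmark, or an external one, the claim that 13 percent is a "low upper bound" cannot be evaluated.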
37. Finally, in the previous paragraphs, I explained why DCWP's responses to criticisms included in the Simonson Report do not adequately resolve Flaws #1, #2, and #3 of the Delivery Worker Survey and why, therefore, the Survey remains unreliable. I note that DCWP provided no response to Flaw #4, which is the Simonson Report's assertion that there was no attempt to verify certain responses to questions, for example, by asking for receipts to prove claimed purchases.

38. In addition, I understand that data on hours worked and income generated was provided to DCWP by DoorDash, Grubhub, and other delivery companies.[55] While DCWP seems to claim that it did not rely on any survey answers relating to these variables,[56] the data provided by DoorDash and Grubhub could have been used to test the validity of at least some of the answers of the NYC Delivery Worker Survey. I have seen no evidence that DCWP attempted such validation.

[53] Elizabeth A. McCune and Sarah R. Johnson, "Chapter 11: How Did We Do? Survey Benchmarking and Normative Data," in Employee Surveys and Sensing, ed. William H. Macey and Alexis A. Fink, Oxford University Press, 2020; Justin A. DeSimone et al., "Best Practice Recommendations for Data Screening" (2015), Management Department Faculty Publications, University of Nebraska, Paper 124.
[54] See, e.g., "Why New York Hires 200 People to Pretend They're Homeless," The New York Times, January 19, 2018.
[55] Minimum Pay Rate Report, p. 2.
[56] Notice of Adoption of Final Rule, pp. 23-24.

Appendix A

Jonah Berger
The Wharton School • University of Pennsylvania
jberger@wharton.upenn.edu

Academic Positions

The Wharton School, University of Pennsylvania
  Associate Professor of Marketing (with tenure), May 2013-
  James G. Campbell, Jr. Assistant Professor of Marketing, July 2010-May 2013
  Assistant Professor of Marketing, July 2007-June 2010
Cornell NYC Tech, Cornell University
  Visiting Professor of Marketing, July 2014-June 2015
Fuqua School of Business, Duke University
  Visiting Associate Professor of Marketing, July 2013-Dec 2013

Education

Ph.D., Marketing, Stanford University, Graduate School of Business, 2007
B.A., Human Judgment and Decision Making (with Distinction), Stanford University, 2002

Honors and Awards

Sage 10-Year Impact Award (1 of the 3 most cited articles in all Sage journals in 2012), 2023
MSI Scholar, 2020
AMA-Sheth Foundation Doctoral Consortium Fellow, 2019, 2020, 2021, 2022
Wharton Teaching Excellence Award, 2019, 2020, 2021
William F. O'Dell Award, Journal of Marketing Research, 2017
Top 5 Most Productive Researchers in Marketing, AMA DocSig, 2017
Outstanding Reviewer Award, Journal of Consumer Research, 2015-2016
Best 2012 Article Finalist, Journal of Consumer Research, 2015
Top 30 Leaders in Business, American Management Association, 2015
Emerald Citations of Excellence, article published in 2012, 2015
Berry-AMA Book Prize for Best Book in Marketing, 2014
Top 5 Most Productive Researchers in Marketing 2009-13, AMA DocSig, 2013
Most Creative People in Business, Fast Company, 2013
Paul Green Award, Journal of Marketing Research, Finalist, 2013
Early Career Award, Association for Consumer Research, 2013
Early Career Award, Society for Consumer Psychology, 2012
Dean's Research Grant, The Wharton School, 2012
Outstanding Reviewer Award, Journal of Consumer Research, 2010-2011
Outstanding Reviewer Award, Journal of Consumer Psychology, 2010-2011
"Iron Prof" Award for "awesome faculty research," The Wharton School, 2011
MBA Teaching Commitment and Curricular Innovation Award, The Wharton School, 2011
Dean's Research Grant, The Wharton School, 2011
Young Scholars Program, Marketing Science Institute, 2011
Alex Panos Research Grant, The Wharton School, 2011
Journal of Consumer Research Best 2007 Article Award Finalist, 2010
James G. Campbell, Jr. Memorial Term Professorship, 2010
Dean's Research Grant, The Wharton School, 2010
AMA-Sheth Foundation Doctoral Consortium Fellow, 2006
Society for Consumer Psychology, Best Student Paper Award (Honorable Mention), 2006
Marketing Science Institute/JCP Research Competition (Honorable Mention), 2004
Jaedeke Scholar, Stanford Graduate School of Business, 2003

Publications

1. Packard, Grant and Jonah Berger (2023), "The Emergence and Evolution of Consumer Language Research," forthcoming, Journal of Consumer Research.
2. Packard, Grant, Jonah Berger, and Reihane Boghrati (2023), "How Verb Tense Shapes Persuasion," Journal of Consumer Research.
3. Giovanni Luca Cascio-Rizzo, Jonah Berger, Rumen Pozharliev, and Matteo De Angelis (2023), "How Sensory Language Shapes Consumer Responses to Influencer-Sponsored Content," forthcoming, Journal of Consumer Research.
4. Berger, Jonah, Wendy Moe, and David Schweidel (2023), "What Holds Attention? Linguistic Drivers of Engagement," Journal of Marketing.
5. Oba, Demi and Jonah Berger (2023), "How Communication Mediums Shape the Message," Conditionally Accepted.
6. Boghrati, Reihane, Jonah Berger, and Grant Packard (2023), "Style, Content, and the Success of Ideas," Journal of Consumer Psychology.
7. Boghrati, Reihane and Jonah Berger, "Quantifying Cultural Change: Gender Bias in Music," forthcoming, Journal of Experimental Psychology: General.
8. Berger, Jonah, Joshua Conrad Jackson, and Ceren Kolsarici (2023), "Catalyzing Social Change: Does Concentration Encourage Action?," forthcoming.
9. Weingarten, Evan and Jonah Berger (2023), "Discussing Proximal Pasts and Far Futures," Journal of Consumer Psychology.
10. Packard, Grant and Jonah Berger (2023), "Wisdom from Words: The Psychology of Consumer Language," Consumer Psychology Review, 6(1), 3-16. Lead Article
11. Rogers, Benjamin A., Herrison Chicas, John Michael Kelly, Emily Kubin, Michael S. Christian, Frank J. Kachanoff, Jonah Berger, Curtis Puryear, Dan P. McAdams, and Kurt Gray (2023), "Seeing Your Life Story as a Hero's Journey Increases Meaning in Life," Journal of Personality and Social Psychology.
12. Berger, Jonah, Matt Rocklage, and Grant Packard (2022), "Expression Modalities: How Speaking Versus Writing Shapes Word of Mouth," Journal of Consumer Research.