Do we now have conclusive proof that masking works? No. Do we have data that strongly suggests this to be the case? Yes. Is it all wrapped up in questions of how to interpret such studies, and the inherent difficulty of studies that can only attempt to approximate an experiment rather than truly being one? Also yes.
Here’s the background: a group of researchers and aid workers from Innovations for Poverty Action, funded by the charity GiveWell, undertook a massive study in Bangladesh to test mask-wearing. But you can’t simply force one group of people to wear masks and prohibit another group from doing so, particularly at the village-by-village level, so what they undertook instead was a series of actions designed to promote mask-wearing.
To begin with, they designated certain villages “control” and others “treatment,” in the same way that, in a drug trial, one group gets the placebo and the other the real medicine. The control villages received nothing, but the “treatment” villages, through a process of randomization, were given either cloth masks or surgical masks for the duration of a 10-week period and, with further randomization, were given additional inducements to wear them, such as encouragement from imams and other “village elders,” or texts from experts framing mask-wearing as an altruistic action or as being for one’s own benefit. They then measured the degree to which these inducements resulted in more mask-wearing, by having observers count the number of people wearing masks in public places, and found that they were able to triple the rate at which people wore masks in public.
Now, to be clear, the entire “package” was implemented for the main test group: free mask distribution alongside encouragement to wear the masks and role-modeling by public officials and community leaders, which included a video with the head imam and a national cricket star shown when the masks were distributed, as well as promotion by local imams during Friday prayers using a scripted speech and further unspecified “mask promotion in public spaces.” Some villages also received monetary or non-monetary incentives for village-wide compliance, a program of asking households to commit to mask-wearing with a pledge and a front-door sign, and a set of text messages; and villages were also randomly assigned to receive either cloth or surgical masks. After 8 weeks, they stopped the “intervention” but kept tallying mask-wearing for a further two weeks.
The result was that mask-wearing in the treatment villages increased by 29 percentage points (or, using a different method of analysis, 28.1 points) relative to a baseline of 13.3%. Surprisingly, the additional boosting efforts had no effect: neither the village-level monetary incentives, nor the text-message encouragement, nor the public front-door-sign program made a difference; in fact, these most likely reduced mask-wearing. The only factors associated with greater rates of mask-wearing were being given a surgical (rather than cloth) mask and being given a mask that was blue rather than green, or purple rather than red.
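The “tripling” claim follows directly from the figures quoted: a 29-point gain on a 13.3% baseline works out to roughly a 3.2x increase. A minimal arithmetic sketch, using only the numbers cited above:

```python
# Sanity check of the "tripling" claim, using only the figures quoted above.
baseline = 13.3        # percent of people observed wearing masks, pre-intervention
increase = 29.0        # percentage-point gain in treatment villages
treatment_rate = baseline + increase   # about 42.3%
ratio = treatment_rate / baseline      # about 3.2, i.e. roughly a tripling
```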
(Yes, really – the effect on mask-wearing of having a purple cloth mask was quite substantial. The mask colors were used to identify people who had received different sorts of “private” nagging via the text messages, but the meaning of each color varied between villages. Green symbolizes Islam; was this color seen as sacrilegious? Was there a different, negative connotation to red, or a positive connotation to purple? This unexpected difference is a bit disconcerting, because it suggests that the researchers lacked the understanding of local culture needed to conduct the research, even if there’s nothing else fishy about it.)
But this was only the first step in their study. Their larger objective was to measure the degree to which the free-mask/mask-encouragement-induced increase in mask-wearing reduced covid cases – and, indeed, they found substantially lower rates in the treatment villages than in the control villages, based on testing everyone who reported covid symptoms during the study period. (Two complications here: they tested only those who reported symptoms, and only about 40% of those reporting symptoms agreed to be tested.)
The results here are fairly dramatic, at least at first glance: relative to the control villages, the villages given surgical masks had an 11% reduction in (symptomatic) covid prevalence over the 10-week period. For those over 60, the group at highest risk, the results are even more dramatic: a 35% decrease in infection. Considering that, even with these interventions, fewer than 50% of people observed wore masks, this suggests that consistent mask-wearing by everyone would have an even greater effect. And, in fact, the authors do the math on how much it cost them to provide the masks, the mask promotion, and the mask-wearer counting, and conclude that it is entirely feasible, in terms of lives saved, to expand these efforts.
The study also looked at the impact of mask-wearing on physical distancing – not so much because it was their goal to push Bangladeshis into more distancing but because one theory was that mask-wearing would, due to risk-compensation, result in people distancing less. Instead, within mosques, people distanced as much as before, but in other circumstances, distancing increased.
But the study left a number of questions unanswered – or, at least, I didn’t see the answers.
We know that treatment villages were given masks and non-treatment villages were not – but the latter villages were still surveyed by phone and asked about symptoms, and those reporting covid symptoms were then asked to be tested, to which about 40% consented. The study did not indicate what percent of villagers responded to the survey, or how they perceived the study, or whether they resented being called and asked questions when only the neighboring village, not they themselves, received masks.
The study also did not report on variation within the treatment villages, except to the extent that standard errors are reported for mask-wearing (and, honestly, I’m not good enough at the stats to interpret these). Was there an (inverse) correlation between village-wide mask-wearing and covid prevalence? That would make the relationship between masks and covid-reduction clearer. Is there a reason why this statistical test would be invalid? The villages are also all presented as generic, one no different than the next, and maybe that’s true, or maybe the randomization process makes differences irrelevant, but I would imagine that there are still real differences, whether because some regions of the country are richer or poorer than others, or have different age pyramids (different fertility rates, different rates of out-migration to the city).
Also, all observations were conducted outdoors except for mosques, because there simply weren’t non-mosque indoor spaces. But being outdoors without a mask is generally not considered particularly risky, and the paper doesn’t state whether villagers were told to wear masks any time they were outside their own homes, or what instructions in particular they were given about the times and circumstances in which a mask was necessary and when the risk was low enough to go without. Or is Bangladeshi public/outdoor life as crowded as indoor American life?
Another surprising element is the two pilot studies that informed the ultimate large-scale study. In the first pilot, free masks and an educational campaign boosted mask-wearing rates by 10.9 percentage points. In the second, they added workers whose role was to “remind” villagers to wear their masks, and boosted the rate by 28.4 percentage points, a level matched in the final study. Honestly, I have trouble making sense of this – isn’t a village in Bangladesh exactly the sort of place where outsiders would be very visibly “outside” and not able to persuade much? Or were “locals” hired in this role? This seems to be another “cultural” issue. As it happens, one of the criticisms of the study is that symptoms were self-reported rather than based on objective testing, so that if the villagers in treatment villages believed there was a particular reason to minimize symptoms (to prove they were compliant, to avoid dishonoring the village, to show loyalty to village elders, etc.), this would cause problems for the study, and their surprising degree of responsiveness to individual “persuaders” suggests to me that this is possible.
Another issue is the differentiation between surgical and cloth masks. The key data: control villages had a covid prevalence of 0.76%, cloth-mask villages 0.74%, and surgical-mask villages 0.67%. There was therefore no statistically significant effect from cloth masks – which should of course raise concerns for places such as the US, where “even a bandana will do” has been the operative approach. And there was a higher rate of mask-wearing in surgical-mask villages, even though that difference wasn’t statistically significant. It does nonetheless raise the question of whether it was the surgical mask itself that made the difference, or the greater likelihood of mask-wearing in surgical-mask villages.
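For what it’s worth, the relative reductions implied by those three prevalence figures can be computed directly (plain arithmetic on the numbers quoted above; nothing here comes from the study beyond those three rates):

```python
# Relative reductions implied by the prevalence figures quoted above.
control = 0.0076     # symptomatic covid prevalence, control villages
cloth = 0.0074       # cloth-mask villages
surgical = 0.0067    # surgical-mask villages

cloth_reduction = 1 - cloth / control        # about a 2.6% relative reduction
surgical_reduction = 1 - surgical / control  # about 11.8%, the "11%" figure cited
```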
Another issue: age-group differences. For surgical-mask villages only, they split out covid rates by age. For those younger than 50, there was no difference in covid infections between these villages and the control villages. For those 50–60 years old, there was a decrease of 23%. For those over 60, there was a decrease of 35%. What would account for this difference? The study does not identify different mask-wearing rates for different ages (presumably the observers did not attempt to guess the ages of the mask-wearers or non-mask-wearers they saw), and, in theory, this shouldn’t matter: the theory of mask-wearing is that it protects others, so the entire community should see declines. However, the study (to prove risk compensation was not happening) showed greater degrees of physical distancing in treatment villages. Did the mask-wearing project result in overall greater caution, especially among older Bangladeshis?
This is a point of contention among critics, as is the increased physical distancing more generally. If physical distancing could be the cause of the reduced spread, or if other factors explain why the reduction appeared only among the old, then did the intervention “work”? Or, rather, what does it mean to say the intervention “worked” if the effect plausibly came from the knock-on effects of mask-wearing, when what we want to demonstrate is that masking can substitute for undesirable alternative interventions like distancing or lockdowns?
Here are some other criticisms I’m seeing.
First, from an anonymous commenter on twitter: the difference between cloth and surgical mask-wearing isn’t statistically significant when measured with something called an “intervention prevalence ratio,” which is more or less the difference in rates given above. On this basis, a confidence interval for the treatment group as a whole shows a definite decrease in covid prevalence, but, because of the necessarily larger standard errors for the smaller samples of each mask type individually, the confidence intervals for cloth and surgical separately are wider, overlap, and do not even definitively establish effectiveness, with only the surgical mask significant, and then only at the 10% level. Even with the large number of villages recruited into the study, the overall prevalence rates were low enough that the desired conclusions could not be definitively established. Given the uncertainties in the study generally, you’d really like to see some slam-dunk numbers here.
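The widening of the confidence intervals when the treatment arm is split in two is just a sample-size effect: the standard error of an estimated proportion scales as one over the square root of n. A minimal sketch, with hypothetical round counts chosen only to illustrate the scaling (they are not the study’s actual sample sizes):

```python
import math

# Why splitting the treatment arm into cloth vs. surgical widens the
# confidence intervals: the standard error of a sample proportion
# scales as 1/sqrt(n). Counts below are hypothetical placeholders.

def se_proportion(p, n):
    """Standard error of a sample proportion p estimated from n observations."""
    return math.sqrt(p * (1 - p) / n)

p = 0.007                                # roughly the prevalence scale in the study
se_pooled = se_proportion(p, 160_000)    # pooled treatment arm (hypothetical n)
se_split = se_proportion(p, 80_000)      # cloth or surgical arm alone

# Halving the sample multiplies the standard error, and hence the CI width,
# by sqrt(2), about 1.41 -- enough to turn a significant pooled estimate
# into two individually inconclusive ones.
```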
Second, a substack site, “bad cattitude,” levies a number of criticisms. Some of them are, I think, too nit-picky – for example, leaning very heavily into complaints that the authors did not definitively establish that the villages were truly similar enough for the randomization to be effective. In particular, the authors did not have a starting value for covid prevalence, just the ending one. It seems unlikely to me that there would have been a large enough difference to invalidate the study, but he says “this is a tiny signal (7 in 10,000) [so] we need a very high precision in start state” and “even minuscule variance in prior exposure would swamp this.” It would be helpful to have seen some math demonstrating the possible effect of different starting levels that are within statistical possibility.
This author’s larger criticism concerns the self-reported nature of symptoms that I noted earlier. Now, we already know of two elements of Bangladeshi village culture that are “non-WEIRD”: the fact that mask color has a statistically significant effect on whether villagers choose to wear masks, and the fact that mask-reminders have a dramatic impact on use. (Just try to imagine that happening in small-town USA!) The substack author also points out that there was a very wide discrepancy between self-reports of mask-wearing (80%) in the researchers’ own prior survey and actual observed use. It seems to me likely that Westerners cannot necessarily predict how Bangladeshi villagers would respond to being given masks, then being called and asked to self-report whether they have any of a set of symptoms, but it also seems to me that there’s a good chance their response would not be the same as Americans’, in one direction or another. That site also quotes the twitter account @Emily_Burns_V, who says, “Is it possible that highly moralistic framing and monetary incentives given to village elders for compliance might dissuade a person from reporting symptoms representing individual and collective moral failure — one that could cost the village money? Maybe?” And, indeed, the study’s authors say that the text-nagging and the incentives had no effect on mask-wearing, but they do not report whether symptom-reporting differed across these groups.
Finally, Bad Cattitude has an interpretation of the age-differences which seems more plausible than “masking had a greater effect on the old”: “the odds on bet here is that old people were more inclined to please the researchers than young people and that they failed to report symptoms as a result.”
One last set of comments on the study, from researcher Lyman Stone, again via twitter. He defends the study authors against the accusation that they failed to pre-test to establish a baseline, noting that the authors themselves acknowledged this analysis was still underway and that this was, after all, a working paper, not the final product; he reports that it is the norm to release preliminary reports even before the data analysis is complete.
Stone also observes that the difference in results between cloth and surgical masks is an indicator of a sort of unplanned “blinding” in the study: both the cloth and surgical mask recipients knew they were part of a study, so if the effects we see were merely a response to that awareness, we’d see the same effects for both cloth and surgical — but we don’t. (Of course, Stats with Cats observes that the difference between the two groups is not a slam dunk because of the confidence intervals, and, as tempting as it may be to do otherwise, it is important to take those confidence intervals seriously.) (For what it’s worth, Bad Cattitude rebuts the rebuttal in a follow-up piece.)
The bottom line: when this study first came across my twitter feed, I enthusiastically retweeted it. Now I’m disappointed. I would have really liked to see more answers and be left with fewer questions; as it is, the study becomes “one data point among many” rather than the slam-dunk evidence that some of its promoters think it is, especially since the whole debate has now resulted in mask-promoters asserting that mask-wearing is always and everywhere cost-free, ignoring that for some people it creates real health issues and that for children it poses risks of developmental delay.
Finally, as a reminder for those who don’t know my background, very early in the pandemic I was not merely an enthusiastic mask-wearer, but a die-hard mask-maker, donating some 150 of them to healthcare workers and others, which means that anyone who judges these comments as those of a crazy anti-masker wholly misunderstands them.
Yes, readers, I have gone back to school and am studying economics. And I’m killing two birds with one stone by writing my commentary on a class-assigned paper in blog format.
The paper in question is, in fact, the 1994 paper which shifted economists’ thinking on the minimum wage, because of its claim that minimum wage boosts had no ill effects on employment and were, basically, “free money.”
Here’s how Vox characterized it:
[F]or years many economists assumed, almost without questioning, that minimum wages destroyed jobs. They might be worthwhile, sure, but you have to weigh the harm they do to the demand for labor against their benefits for workers who remain employed.
In a paper first published by the National Bureau of Economic Research in 1993, Krueger and his co-author Card exploded that conventional wisdom. They sought to evaluate the effects of an increase in New Jersey’s minimum wage, from $4.25 to $5.05 an hour, that took effect on April 1, 1992. (At 2019 prices, that’s equivalent to a hike from $7.70 to $9.15.)
Card and Krueger surveyed more than 400 fast-food restaurants in New Jersey and eastern Pennsylvania to see if employment growth was slower in New Jersey following the minimum wage increase. They found no evidence that it was. “Despite the increase in wages, full-time-equivalent employment increased in New Jersey relative to Pennsylvania,” they concluded. That increase wasn’t statistically significant, but they certainly found no reason to think that the minimum wage was hurting job growth in New Jersey relative to Pennsylvania.
Card and Krueger’s was not the first paper to estimate the empirical effects of the minimum wage. But its compelling methodology, and the fact that it came from two highly respected professors at Princeton, forced orthodox economists to take the conclusion seriously.
And with that in mind, join me as I dig through the meat of the study: “Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania.” (This is not actually the “class assignment”; I will need to distill my thoughts even further into 250 – 300 words, which will be harder!)
The core concept of the study was this: generally speaking, it’s hard to measure the effects of a change in the minimum wage, because there’s so much else happening at the same time. For example, the most recent change in the US federal minimum wage occurred at the same time as the “Great Recession.” But in 1992, New Jersey increased its minimum wage to a level above the federal minimum, $5.05 rather than $4.25, and next-door Pennsylvania did not. The authors believed that looking at changes in employment patterns, wages, costs, etc., at fast-food chains in those two states provided a means of analyzing the impact of the minimum wage hike.
In order to do so, they (or rather their employees) conducted phone surveys of fast-food restaurants in those two states in late February/early March of 1992, just before the minimum wage hike was implemented, and in November-December 1992, after the April 1992 change had had some time for effects to be seen. They had, all things considered, reasonable response rates to their surveys (72.5% in PA and 91% in NJ, with different numbers of attempts made in the two states) for the first wave, and bolstered their response rate for the second wave with in-person visits as needed.
Their core findings:
In the New Jersey restaurants, the number of employees per store actually increased during this time frame, even as they decreased in Pennsylvania due to the recession at the time. At the same time, within New Jersey, among stores which had previously had a starting wage equal to the minimum wage, as well as those stores with a starting wage above the minimum but below the new minimum, the number of employees increased; but in those stores where wages were already above the minimum, employment decreased.
The authors then get mathier. They perform two regressions, one to estimate the effect on employment of a store being in New Jersey, and another to estimate the effect of a store having previously paid less than the new minimum wage. This is where my interpretation is a bit marginal, but here goes:
The change in the number of FTE employees per fast-food restaurant in NJ, compared to the change in PA, was 2.51. The regression model calculates, stripping out other effects, that New Jersey-ness accounted for 2.3 new employees per store. Having to raise wages (compared to NJ restaurants already paying the new minimum) produced a regression coefficient of 11.91 on the “wage gap,” controlling for differences among regions within NJ as well as among the large chains surveyed; this coefficient looks high because it is multiplied by the gap itself, which averaged 0.11.
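To check my reading of that coefficient, the implied employment effect for the average affected store is just the coefficient times the average wage gap. A back-of-the-envelope sketch (variable names are mine, not the paper’s; only the two figures come from the text above):

```python
# Back-of-the-envelope reading of the regression numbers quoted above.
gap_coefficient = 11.91   # regression coefficient on the "wage gap" variable
mean_wage_gap = 0.11      # average proportional wage increase required of NJ stores

# Predicted FTE employment change for the average store that had to raise wages:
predicted_fte_effect = gap_coefficient * mean_wage_gap   # about 1.3 FTE per store
```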
They also perform additional statistical tests by fine-tuning their calculations, for example, excluding New Jersey shore area stores because of their tourist economy, adjusting the weightings of part-time employees in calculating full-time equivalents, etc. These produce different employment impacts but still the same conclusion, that the minimum wage increase actually increased employment.
The authors also assess whether the minimum wage hike affected other aspects of the restaurants’ operations. There was an increase in full-time workers in New Jersey, but no significant difference between the NJ restaurants that had previously paid less and those that had paid more. There was no statistically significant change in the restaurants’ opening hours, the free/reduced-price meal benefit, the amount of the first raise, or the time until that first raise was given.
They did find that prices in New Jersey increased by 4%, a slightly greater increase than would be needed to make up for the higher wages (taking into account the wage increase and the proportion of the restaurant’s costs due to labor), but they discard this as a relevant consideration because prices increased at the same level regardless of whether an individual restaurant was impacted by the wage hike or not (based on whether their starting wage had been below the new minimum or not).
Finally, they assessed whether the wage increase prevented new stores from opening, looking at broader data, and found no statistically-significant evidence.
After presenting their statistical tests, they propose various explanations. They consider alternate labor-market theories (“monopsonistic and job-search models”), but discard them. They theorize that employers obliged to pay higher wages may decrease their quality (longer lines, reduced cleanliness) or may shift pricing of some products relative to others, but ultimately conclude with the simple statement that “these findings are difficult to explain.”
So what’s to be made of this?
Their analysis is certainly more useful than one without any “control group,” and it’s the new “one weird trick” of economists to find and exploit what they consider to be “natural experiments” (though I suppose “new” is relative). It also has, I think, particular merit in looking at employment at specific businesses, rather than at unemployment rates across a region, so as to drill down to the question of “how do employers manage an increased labor cost?”
But there are plenty of deficiencies:
One common gripe of Krueger’s critics (e.g., at the Foundation for Economic Education) is that the time frame of the analysis is simply too narrow. By late February, employers already knew they would need to offer a much higher minimum wage, and would likely have been taking that into account by avoiding hiring and reducing staff through attrition. It could even be that the increase in employees was an indicator that they found, on average, that they had been too cautious in the period leading up to the hike. It also seems likely that employers wouldn’t have been sitting on some staff-reducing innovation ready to implement immediately upon increasing wages; labor-reduction initiatives take time, so the long-term effect of the wage hike would take some time to materialize. (For example, the free refill was introduced by Taco Bell in 1988, but became commonplace in the 90s. Was it merely a coincidence that this marketing tactic spread at roughly the same time as a significant nationwide minimum wage increase, with a phase-in driven by the time and effort needed to remodel locations, or did stores find it more advantageous to reduce worker time in this fashion once labor increased in cost? Other changes, such as the self-service ordering kiosk, required advances in technology that would presumably be motivated by higher labor costs but were not simply “waiting in the wings.”)
It also seems too simplistic to discard the increase in prices just because prices increased at all New Jersey restaurants, including those which had already been paying higher wages. It seems fairly reasonable that once the previously-lower-paying restaurants had increased their prices, the rest would follow, or that franchise owners with a mix of higher- and lower-wage restaurants might have raised prices in parallel. Consequently, this consistent price hike across stores is not the counter-evidence Krueger claims.
In fact, it would seem to merit a closer review, to identify the characteristics of those restaurants previously paying higher wages, especially because they did not boost their wages to remain a “higher wage employer.” Were these particularly-profitable restaurants? Restaurants which had difficulty recruiting employees due to locally-tight labor markets? For instance, restaurants in wealthier suburbs tend to recruit workers from further away and offer higher wages to make the additional travel time worthwhile. Would they, in the longer term, have difficulty finding workers without boosting that wage differential?
Lastly, they measure the impact of the wage increase on overall work hours by asking whether the opening hours changed, whether the number of cash registers changed, and whether there was a change in the number of cash registers typically open at 11:00 AM. But it seems likely that a key way employers would mitigate the effect of a wage hike is by lowering staffing at slower times of day, either by scheduling employees for fewer hours or by being readier to send employees home. And while they ask whether employees work on a full- or part-time basis, they do not actually ask in their survey what the total or average work hours are at each surveyed store. Perhaps this is information they considered too difficult for store managers to provide, and they omitted the question to ensure a response to the rest of the survey, but without it, we simply cannot know the full employment effect.
Now, I’ve said that this is considered a key study that shifted the debate about the minimum wage, and, it turns out, it wasn’t without pushback. Richard Berman of the Employment Policies Institute criticized the study in a 1996 report, “The Crippling Flaws in the New Jersey Fast Food Study,” and Card and Krueger responded in 2000 with their own criticism of Berman’s criticism, as well as of a further study (not available without paywall) by economists David Neumark and William Wascher. Card and Krueger find fault with the attempts by Berman and by Neumark and Wascher to re-do the analysis using better or alternate data sources, but do not directly address Berman’s “crippling flaws” (or if they do, it is so brief that I missed it). What were these flaws? First, that there were significant numbers of stores with clear data errors, such as implausible shifts in the numbers of part-time and full-time employees, as well as a failure to specify, in the price-increase portion, what defines a “regular hamburger” (is it a Big Mac? A Quarter Pounder? A dollar-menu basic hamburger?). EPI researchers went back to many of the surveyed restaurants and could not match the employment numbers, and Berman believes this is simply because of inconsistencies in the definition of part- vs. full-time and the basic fact that the manager or assistant manager answering the survey would have been juggling multiple duties and relying on memory for these numbers. In any event, Card and Krueger dial back their claims, from 1994’s statement that “we find that the increase in the minimum wage increased employment” to a more cautious “the increase in New Jersey’s minimum wage probably had no effect on total employment in New Jersey’s fast-food industry, and possibly had a small positive effect.”
I’ve long been a critic of the Obama Museum, which will not be a “presidential library” but literally just a museum, plus ancillary services and programming. But with construction now beginning, I finally got around to looking at Obama.org’s projection of economic impact, and it’s worth evaluating.
Short-term jobs (construction)
The building’s construction is expected to employ 3,682 people, with a total of $214,635,630 in “labor income,” over the course of construction. That’s an average of about $60,000 per job — these are a mix of various types of jobs, but all short-term.
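That per-job average is just the quoted labor-income total divided by the job count (plain arithmetic on the report’s two figures; it actually lands a bit under $60,000):

```python
# The "about $60,000 per job" figure: total labor income divided by job count.
total_labor_income = 214_635_630   # dollars, from the impact report
construction_jobs = 3_682

avg_income_per_job = total_labor_income / construction_jobs  # about $58,300
```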
Ongoing payroll for the Obama Center is forecast at $19 million. However, only 43% of the jobs are expected to be held by South Side Chicagoans, with only 16 people employed in “admissions,” for example, and 10 in “Museum Operations and Administration.” Security guards and janitorial staff will be contracted out rather than directly employed.
The consultants predict $3.1 million in revenue for the planned four-star restaurant and cafe, but recognize that only 25% of the revenue will be “new” (that is, that many of the diners would have otherwise eaten elsewhere). They forecast $6 million in gift shop revenue. They forecast $110K in “net new” private event spending, because 80% of private events held at the venue would have been held elsewhere in Cook County.
The forecast for museum attendance uses an upper bound based on a hypothetical maximum derived from the number of opening hours, fire capacity, average visit length, etc., multiplied by a 30% utilization factor and by a “historical and cultural significance multiplier” of 1.15 (that is, the expectation that the Obama museum will be exceptionally popular) — which, honestly, seems fairly suspect. The lower bound is calculated from actual visitor counts at real-world presidential museums for recent presidents — but using some math which determines that, even though the highest visitor count among these (excluding each museum’s opening year) was 426,000, for the Reagan museum after it became the recipient of Air Force One, the Obama Museum would draw 50% more visitors than even this high-water mark, because of the greater size of the Chicago metro area and its number of tourists.
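The shape of that upper-bound formula can be sketched in a few lines. Every capacity input below is a made-up placeholder (the report’s actual hours, fire capacity, and visit-length figures are not given in the post); only the 30% utilization factor and the 1.15 multiplier come from the text:

```python
# Illustration of the consultants' upper-bound attendance formula.
# All capacity inputs are hypothetical placeholders, NOT the report's figures.
hours_open_per_year = 3_000    # placeholder
fire_capacity = 500            # placeholder: maximum visitors in the building at once
avg_visit_hours = 1.5          # placeholder

theoretical_max = hours_open_per_year / avg_visit_hours * fire_capacity
utilization = 0.30                 # from the report
significance_multiplier = 1.15     # the "historical and cultural significance" factor

upper_bound = theoretical_max * utilization * significance_multiplier
```

With these placeholders the bound comes out to 345,000 annual visitors; the point is only how sensitive the result is to the two fudge factors at the end.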
Ticket prices are expected to be $18 per adult, $11 for children, $10 for out-of-state students, and free for in-state students. Parking cost would be $22. In addition, visitors are forecast to spend on average $5 in food purchases and $10 in the gift shop.
Outside the museum, they calculate that visitors will spend
$45 per person for lodging, for in-state out-of-town visitors, or $112 for out-of-state visitors. Why out-of-state visitors would spend more on their hotels is not clear.
$19 per person in retail spending, for in-state out-of-town visitors, or $56 per person for out-of-state visitors. This category is not at all clear to me. Are they saying that people will travel to Chicago specifically for the Obama Museum and, once here, will take in a bit of Magnificent Mile shopping?
$32/$102 per person for spending on food. Again, the only way this makes sense is if they assume the visit will be motivated by the Obama Museum, rather than it being an add-on to an existing visit. Or do they “take credit” for longer visits on the assumption that the Obama Museum will be the tipping point in people deciding on Chicago in their vacation planning?
This was the part that was the biggest surprise: we kept reading about how the Obama Center will contribute to the public good with conferences of various kinds. But those aren’t free. However, it is not clear to me to what extent the registration fees are meant to cover the cost of the event, whether it’s subsidized, whether some participants will have a reduced fee, etc.
Their largest event is planned to be an Annual Summit with 5,000 participants. Each of them will pay on average $577 for the event (the unround number suggests some would be given reduced rates), for a total revenue of $2.9 million for an event expected to cost the Obama Museum $4.2 million. Where the additional funds come from isn’t explained — is it from the endowment?
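The Summit numbers check out arithmetically, and they make the shortfall explicit (plain arithmetic on the three figures quoted above):

```python
# Quick check of the Annual Summit figures quoted above.
participants = 5_000
avg_fee = 577                        # dollars per participant, on average
revenue = participants * avg_fee     # 2,885,000 -- the "$2.9 million" cited
cost = 4_200_000

shortfall = cost - revenue           # the gap that must be covered from elsewhere
```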
Air Force One non-sequitur
Finally, the document closes with a slide on the “possible impact of Air Force One exhibit” — but this is an appendix, and we don’t know what the accompanying talking points were. Is it meant to suggest that the attendance numbers used for calculating the estimates were overstated? That they hope to get a similar “big draw” here? Dunno.
What about the rest of the Center?
The Obama Center won’t just have a museum.
There will be a new public library branch there. Honestly, it is not at all clear whether the money for this is coming from the Obama Foundation or whether the Public Library is simply using its own budget, nor, in fact, whether the space will be provided free or rented out. Similarly, there will be a “program, athletic, and activity center” with “recreation, community programming, and events.” Will these activities be provided free of charge, for a fee, or by means of the Chicago Park District using this as a site for its programming?
None of these other activities are reflected in the impact calculations; if the generosity of donors worldwide was expected to benefit Chicagoans through use of Obama Foundation funds on these activities, you’d expect to see them taking credit for this. What’s to be made of its absence?
In any case, the fight against the Museum appears to be over. What remains is a fight to ensure that public funds are not spent on its ongoing expenses. But, unfortunately, Chicago being Chicago, and Illinois being Illinois, this is likely to be a losing battle.
Rewarding essential workers? Sure. Borrowing unknown sums of money to do so? Not so great.