What puzzles me about the social media debate is why our null hypothesis/prior should be that there is no effect.
These platforms are literally A/B tested to maximize user lock-in, addiction, and social comparison. This is pretty well known. 1/5
Why do people sometimes seem to overweight rare events, and other times ignore them? Why do those w/ access to the same information end up with different beliefs about the risks they face?
We present model + evidence for how people perceive uncertainty🧵1/9
Needless to say, I would not be here without Danny Kahneman. I was just teaching prospect theory last week to a whole new generation of students. This week I was teaching his 1973 book Attention and Effort. A true intellectual giant.
May his memory be a blessing זכרונו לברכה
We also know that young people are willing to pay significant amounts of money for these platforms to *not exist in the first place.* ("When Product Markets Become Collective Traps: The Case of Social Media")
Kind of feels like the smoking debate from the '70s/'80s, where despite associations and a compelling mechanism, the lack of strong causal studies → a “no effect” null. 5/5
Unless we put zero weight on decades of social/clinical psychology/econ research on these topics, our null/prior should be a concern-worthy effect.
(If this is what we want to do, fine, but then what is the point of social science if not to inform out-of-sample priors?) 3/5
Strawberry production in both Europe (gray) and America (blue) has skyrocketed.
Europe produced more strawberries by planting way more acres. America produced more strawberries without planting way more acres.
Instead, America made a better strawberry.
In this case, a lack of good studies should lead us to maintain our null/prior (a concern-worthy effect) rather than make us think there are no psychological effects of social media, especially for groups that are still developing (young people). 4/5
I’ve seen several posts critiquing methods—on lack of power, potential issues w/ research design. As psych/econ researchers, we are trained to do this, and this is good!
But I think we are less trained to evaluate single studies from a policy perspective. 1/3
Can smartphone usage at school negatively affect learning and well-being?
In an event study, Sara Abrahamsson (who graduated from NHH/FAIR last year) finds that a smartphone ban improved mental health and grades and reduced bullying.
Re-upping this amazing thread by
@Undercoverhist
on the history of what rationality means in economics.
To be fair, defining rationality precisely is exactly what allowed it to be overturned by Kahneman/Tversky/Allais/co.
So kudos to Samuelson/Savage and co.
1/The 2 final
@Bachelor_X
debates were on “Are Economic Agents Rational?”
so here’s a thread on how economists have defined rationality across the 20th century.
Focus is on 2 debates: status of expected utility theory (EUT) in 50s & of Kahneman & Tversky’s results in 70s to 90s
This paper is great. Besides proposing a new explanation for the endowment effect—cautious utility, a mechanism with independent empirical support—the model can explain the otherwise puzzling lack of relationship between WTA and WTP.
Is the endowment effect not a consequence of loss aversion?
Cautious Utility—a new model—suggests it can arise from uncertainty about tradeoffs, and yields an endowment effect even if losses and gains are treated symmetrically:
via coauthor
@pietroortoleva
@dggoldst
These PMs are mandated by the endowment to be diversified into sectors that earned well below the S&P, so this performance is actually pretty impressive.
Really interesting paper. There is a long history of using agent-based modeling/simulation in social science, both for theory testing and generation (e.g. Schelling; SFI has great faculty here, as does CMU).
This paper shows how to leverage LLMs + AI to significantly improve this process.
Do we hold others responsible for their choices even when these choices have been shaped by unfair unequal circumstances?
Yes, we do, suggests "Shallow Meritocracy" from Peter Andre (
@ptr_andre
), recently accepted at REStud.
Maybe the study's flaws are significant enough that the answer to 1) is no.
But it seems to me that there should be more of this type of evaluation in the discourse as well.
PS I think the critiques are super valuable. This is not a subtweet!
Here, evaluation is a bit different.
Questions such as:
1) Did this study move my priors in light of prior work?
2) Should there be deference to the status quo? If smartphones were introduced now, would we allow them in schools?
are relevant. 2/3
Very cool new paper by
@avicgoldfarb
and Xiao on behavioral firms.
Inexperienced bar owners overreact to transitory shocks and exit too early, whereas more experienced owners know to “weather the storm”. Results implicate limited attention as the driver
That’s because VS made sure the assumptions held in the experiment (induced value, full information, zero uncertainty). I run the same experiments as Smith in class: break even one of those assumptions (e.g. replace induced values w/ real goods) and everything breaks down.
People pay much more attention to information about things that they own, and that changes how and what they end up learning. Great clip from Steve Jobs.
We have a paper providing evidence:
Ungated:
@andre_quentin
The fact that those p values aren’t clustered gives me more confidence in the results, not less (suspicious clustering would look like everything sitting around .024 or so). If we are being Bayesian here, and your prior was no effect, this set of results together should lead you to update quite a bit.
@andre_quentin
But the analysis is on the school level and it's clearly underpowered to find a precisely estimated effect. Even a very large effect would produce the distribution of p values observed in the paper (it's an event study, not an RCT). My comment was using the Bayesian perspective.
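To make the power point concrete, here's a toy simulation (all numbers are hypothetical, not taken from the paper): even a large true effect, estimated with only a dozen units per arm, produces p values scattered all over the place across replications.

```python
import math
import random

def welch_p(a, b):
    """Two-sided p-value for a two-sample t-test (normal approximation,
    fine for illustration)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    t = (mb - ma) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

random.seed(0)
TRUE_EFFECT = 0.5   # a large standardized effect (assumed)
N_PER_ARM = 12      # few units per arm -> low power (assumed)

pvals = []
for _ in range(1000):
    control = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_ARM)]
    pvals.append(welch_p(control, treated))

power = sum(p < 0.05 for p in pvals) / len(pvals)
print(f"power ~ {power:.2f}; p values span {min(pvals):.3f} to {max(pvals):.3f}")
```

With these made-up parameters, only roughly a quarter of replications are "significant," and the rest of the p values spread widely — exactly the non-clustered pattern discussed above, despite a genuinely large effect.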
@steve_tadelis
@chris_petsko
Fun fact: I was reviewing a paper for a top journal that used annual data. I pointed out there was a similar paper using daily data + a better identification strategy. The authors responded that annual data was better because it's less noisy. The editor agreed; the paper was published 🤯
Prediction: unless industry moves quickly with some sort of verification/watermarking, better AI models will lead to disengagement from digital spaces.
Once there's a common prior that content may be fake and is difficult/impossible to verify, people will just stop engaging with/consuming it
A dynamic pricing model, similar to Uber's surge pricing, is being prepared by Wendy's. The price of items may change over the course of the day depending on the demand, with a lunch rush order costing more than "off-peak" hours.
@alexolegimas
Great points. But my one quibble is that even if it’s severely biased, we can still learn from it—so the answer to 1) should almost never be no
@CFCamerer
@ben_golub
You’ll like this paper, “Behavioral Foundations of Model Misspecification”.
It links the model misspecification literature to the behavioral approach of modeling specific heuristics/biases, so you can use results from the former to study learning outcomes of the latter
@andre_quentin
There are two sources of uncertainty: model uncertainty (here: researcher degrees of freedom, hence my point on the lack of clustering) and within-model uncertainty (here: the effect of smartphone use). Since the effects are sizable and none point in the opposite direction, you should update your prior.
How do humans interact with pricing algorithms within firms❓
We explore this question using two large-scale field experiments in collaboration with
@ZalandoTech
in a new WP.
This work is a collaboration with Tobias Huelden,
@VJascisens
and
@LarsRoemheld
.
🧵Thread below.
Just to be clear,
@BenSManning
@Kehang_Zhu
and
@johnjhorton
's methods can lead to *big* improvements--potentially paradigm-shifting ones--to the agent-based method.
But I'm not an expert in this space.
Take two assets. In one case, information about the outcomes (y axis) in each state (x axis) is presented simultaneously (left fig); in other case, person sees outcomes of every state one by one (right fig).
Very cool paper.
TL;DR: cognitive noise is important for explaining forecasting data and complicates the interpretation of forecasts through the lens of bias.
🎉 Thrilled that my paper with
@dthesmar
is forthcoming in the Review of Financial Studies
@SFSjournals
. A special moment for me since this was my first project in the PhD and it's my first publication!
Paper:
A summary 🧵 ⬇️⬇️⬇️
64% of TikTok users would be better off if the app didn't exist. TikTok users would pay $28 to have others, including themselves, delete their accounts.
@UChi_Economics
' Leonardo Bursztyn, Benjamin Handel, Rafael Jiménez-Durán, & Christopher Roth
Yes! When seeing info simultaneously people are overoptimistic & choose the asset that mostly underperforms but exhibits large & unlikely outperformance (U); they select the consistently outperforming asset (F) when learning the same info sequentially. A 40% preference reversal!
@Andrew___Baker
"you'd have to pay me millions to give up seasonal farm to table" and "i don't think npr is that liberal" are pretty consistent statements 😚
@jondr44
@Undercoverhist
@elie_tamer
I like this Jonathan. I’m not an econometrician but use Manski’s conceptual insights a lot in my work.
Our Inaccurate Statistical Discrimination paper can be interpreted through Manski’s idea that preferences can’t be identified when rational expectations (a strong assumption) fail
We show that perceptions of uncertainty depend critically on the interaction between cognitive constraints (memory & attention) and the learning environment---whether information is presented sequentially or simultaneously.
39% of age-eligible US scientists enlisted in the draft.
A 10% increase in exposure to enlistment leads to a 42% increase in the number of female entrants.
More women in STEM led to more female inventors.
So the binding constraint was fewer women in the physical sciences
@PMoserEcon
It’s official.
@chicagobooth
Econ PhD had a great year! 💪💪💪🥳🥳🥳
My cohort was the first of the larger cohorts; the Booth Econ PhD previously had only 0, 1, or 2 students a year.
Booth Finance and other well-established Booth programs placed great as always:
@squig
@GordPennycook
What I don’t understand is why our null hypothesis/priors should be that there is no effect.
These are platforms A/B tested to maximize lock-in, addiction, and social comparison. Unless we put zero weight on decades of social/clinical psych, the null should be a sizable effect.
When learning from simultaneous information (e.g. a price chart), limited attention is drawn to rare but salient events, which leads people to overweight those states. When learning the same information sequentially, imperfect recall leads people to underweight those same states.
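This isn't the paper's model, just a back-of-the-envelope sketch of one force on the sequential side (parameters below are made up): with short sequential samples, a sizable share of learners never experience a 10%-probability state at all, so for them the rare state is subjectively invisible.

```python
import random

random.seed(1)
P_RARE = 0.10       # true probability of the rare state (hypothetical)
N_DRAWS = 10        # short sequential learning episode (hypothetical)
N_LEARNERS = 10_000

# Count learners whose sample never contains the rare state.
never_saw = 0
for _ in range(N_LEARNERS):
    if not any(random.random() < P_RARE for _ in range(N_DRAWS)):
        never_saw += 1

share = never_saw / N_LEARNERS
print(f"{share:.0%} of sequential learners never experience the 10% event")
# analytically: 0.9 ** 10 ≈ 0.35
```

Small samples alone don't bias the average learner, which is why the paper's imperfect-recall mechanism matters; the sketch just shows how easy it is for a rare state to go unobserved sequentially, while a simultaneous display makes it salient to everyone.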
Our paper with Josh Jackson on whether global values have converged over the last 40 years is out: . We find that globalization and rising wealth did not result in uniform acceptance of 'Western values,' such as support for abortion and gay rights.
This result is consistent with the so-called “description-experience” gap, but we show that it's entirely driven by the interaction of attention & memory with the learning environment. Manipulating these factors completely eliminates the gap, and even reverses it (see previous fig).
This reply is mostly in jest to
@page_eco
's q on choice to correct one's beliefs or not.
I think this will depend on beliefs about the cost/benefit of holding wrong beliefs. Most ppl will go to great lengths to hold correct beliefs if they think there is an instrumental benefit.
@SandroAmbuehl
I think those models had the most accumulated evidence before alternatives were suggested. I would not call this first mover advantage as many alternative behavioral time/social pref/risk preference models were proposed contemporaneously but did not find as much support.
People can learn information about uncertainty by observing the distribution all at once (e.g., seeing a stock return distribution) or sampling outcomes from the distribution sequentially, bit by bit (e.g., experiencing a series of stock returns).
@arpitrage
We show that it’s not just vibes but direct preference for exclusion—people get more utility from things they know others want but can’t have—which firms respond to by restricting access
We run studies showing that explicit restriction can be profit-max
A standard assumption is that if people have access to the same information, then perceptions of uncertainty should be the same, and choices should be a function of preferences.
If this is not the case, it implies identification issues à la Manski for recovering preferences from choice data
🤯🤯🤯
Pairing GPT with reinforcement learning leads to machines training machines—the result is faster and better training than w/ human trainers.
GPT suggests and refines a reward function while reinforcement learning trains the robot.
Wtf. Brave new world.
Can GPT-4 teach a robot hand to do pen spinning tricks better than you do?
I'm excited to announce Eureka, an open-ended agent that designs reward functions for robot dexterity at super-human level. It’s like Voyager in the space of a physics simulator API!
Eureka bridges the
@JonSteinsson
@m_urquiola
@R2Rsquared
@lugaricano
Market differentiation? Harvard etc accept pool w/ more precise signals (eg right schools, right recs), Chicago accepts larger pool w/ noisier signal and selects after conditioning on info. Chicago wouldn’t necessarily lose best students since they wouldn’t get into former group
These results have implications for recovering preferences for risky choice data. In our studies, the large shifts in choices are driven by changes in beliefs, which are systematically distorted by the learning environment.
Feedback very welcome!
@ak2912
Still remember going to the market and seeing shelves and shelves of mayonnaise but absolutely nothing else. Turns out market prices are a good thing :)
@ShengwuLi
@_alice_evans
I think this will be a generic result with rise of AI: people will screen *much* more heavily, to the point of potentially exiting digital spaces (that’s a personal hypothesis).
Note the underlying distributions of the assets are actually the same, just shifted. But one asset (U) underperforms the other (F) most of the time, except in one state where it outperforms by a lot
People saw the *same* info in both learning environments. Do beliefs & choices differ?
@lakens
@benleo_econ
@eugen_dimant
Econs tend to write fewer papers in general because the papers are bigger (literally). Econ journals (including those outside the top 5) expect tons and tons of robustness checks, alternative specifications---basically for the paper to have all the bases covered.
@dggoldst
And this is from a relatively small user base too. I took a few months' break and, coming back, it's even more stark how different it is from the heyday. Sad.
@Andrew___Baker
I mean the fact that we are still on this stupid site that is now being run by a very tired intern and 7 hamsters in wheels, says something.
@andre_quentin
Right, and that is the model-uncertainty part. If the reported effects are cherry-picked among the set of potential DVs, then I agree with you. But I don't see evidence for that here---the DVs appear theoretically informed and I can't think of clear DVs left out.
@arpitrage
@paulgp
Absolutely. It’s worth thinking about why the status quo is given deference here. If smartphones were introduced now and you decided whether to allow them in schools, what would be your call?
Like w/ smoking, the perfect research design may never happen. We have to do policy w/ imperfect evidence
@SandroAmbuehl
I'm really happy that math psych is making its way into behavioral econ again. Besides Josh, Sam, and Tom's work, Jennifer Trueblood has amazing papers, as do Jerome Busemeyer, Joseph Johnson, and Tim Pleskac, e.g.
@lakens
@benleo_econ
@eugen_dimant
This is especially true for job market papers, which tend to be even longer. There is simply not enough time during a PhD to write many papers of this sort and to have them published.
@squig
@H_Sjastad
This is precisely what I was thinking. The mechanism seems to be that if you get me to think about something I would have not otherwise considered, my attitude converges to it. That doesn’t seem like dissonance to me.
@Andrew___Baker
@mattkahn1966
I don’t think behavioral economics has anything to do with whether we are able to adapt, other than predicting people won’t be concerned enough to generate top-down change (but given the decentralization of the problem, I don’t know what a top-down solution would even look like)
@mattkahn1966
As a behavioral economist, I agree with you! I think these are two separate arguments. One: will there be enough of a concern from the public to push the govt for a top down solution. This is what the BE folks are saying won’t happen. …