The end times portfolio is long
Do negative potential effects of AI imply shorting the market?
In general, it's a very good sign to have skin in the game: to reflect your claims in your deeds. For instance, the AI safety movement worries about AI causing an enormous catastrophe in the near future. What would be skin-in-the-game for this position?
Tyler Cowen argues that the way for them to show their sincerity would be to short the market, since they expect huge damages from AI eventually. (He further implies that their not shorting the market makes light of the x-risk view.)
Others argue that those worried about x-risk should go long on AI-related stocks - since, if AI is to be powerful enough to cause catastrophe, related stocks will see incredible gains for some period before that. Going long also buys a form of insurance: you end up wealthy in precisely the worlds where powerful AI arrives.
Another way of betting your beliefs would be to increase your current consumption (since you don't expect to be able to realise investment gains).
And another is to zero out your retirement contributions, since you think you won’t get to draw down.
So, does a lack of “endurance to collect on [your] insurance” imply shorting the market? Note that this is not investment advice.
Model
To answer which approach is best, we need to model a portfolio that lets us test each option. In our portfolio there are three assets: first, a risk-free bond, which gives us the return r; second, a risky AI stock, with a mean return of μ₁ and a standard deviation of σ₁₁; and third, a risky non-AI stock, with a mean return of μ₂ and a standard deviation of σ₂₂.
We want to maximise our return while minimising our risk, so we represent returns as the annual returns for every year in the future, discounted back to the present, and represent our risk aversion with respect to income.1
Finally, provided that the two stocks are not perfectly correlated, we can reduce our portfolio risk by owning some of both of them. The less correlated the two stocks are (their covariance is σ₂₁), the more we would like to spread between them (holding returns constant). This means that the equation to calculate our optimal holdings and consumption looks like this:
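(What follows is the standard Merton-style form of this solution; the post's own notation may differ slightly.)

$$w^{*} = \frac{1}{\eta}\,\Sigma^{-1}\left(\mu - r\mathbf{1}\right), \qquad \frac{c^{*}}{W} = \frac{\rho}{\eta} + \frac{\eta - 1}{\eta}\left(r + \frac{(\mu - r\mathbf{1})^{\top}\Sigma^{-1}(\mu - r\mathbf{1})}{2\eta}\right)$$

Here w* is the pair of weights on the AI and non-AI stocks (whatever is left over sits in the bond), Σ is the covariance matrix built from σ₁₁, σ₂₂ and σ₂₁, μ is the vector of mean returns, η is relative risk aversion, and c*/W is consumption as a share of wealth; any unpriced doom or mortality adds to the effective discount rate on top of ρ.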
Several things immediately appear. Firstly, the optimal portfolio is independent of discounting - so high estimates of doom do not suppress stock holdings. This is because the mean returns and standard deviations of the stocks don't change across time, so the risk profile of any given allocation is time-invariant - you therefore don't want to change your desired portfolio if you start valuing the future more or less relative to the present.2
Secondly, if covariance is negative and both risky assets return more than bonds, optimal holdings of non-AI stock are always positive. This is because negative covariance means risk can be reduced by holding at least some of each stock, and the second condition means doing so is a better deal than bonds.
Thirdly, although optimal consumption is increasing in both discounting and expected returns, η determines the relative weight of each - at η = 1 (log utility) expected returns have no effect at all, while under high levels of risk aversion mortality is greatly suppressed in importance.
Parameter estimation
High estimates of doom are, though, theoretically compatible with high estimates of covariance - so we need to establish upper bounds on individuals' beliefs to check whether any set of beliefs could justify shorting the market.
μ₁: The mean return of AI stock. Under Manifold's (a prediction market) estimates for when OpenAI reaches a $1trn valuation (current valuation: $150bn), expected returns would be 52%/year if valuation growth were monotonic and the firm were worthless if it never reached $1trn. However, Manifold also estimates a 62% chance that at least one of OpenAI and Anthropic drops below its current value by 2030, and a 34% chance that OpenAI no longer exists by 2030. Many of these trades are on very low-volume markets, but if Anthropic and OpenAI are equally likely to no longer exist, and have reached a valuation of $1trn at some point in half of those worlds, then this implies a 44% mean return.3
Of course, OpenAI and Anthropic aren’t available to buy in public markets. What is the maximum the heavy-AI public portfolio could return?
If Deepmind were valued at $100bn, given Alphabet's $2.1trn valuation this only raises returns by 5.6% to 2035. If chip manufacturers can capture 10% of the additional value created - noting that this would be 4x the AI firms themselves - this would, given Nvidia, TSMC and ASML's combined valuation of $4.42trn, boost their returns by 14.7% to 2030. If they capture only as much as the labs themselves, this again falls to 5.5%. The 5.5% excess return (so 15.5% total return) will be used here.4
σ₁₁: Standard deviation of AI stock return. In the two cases above the standard deviations were 1.02 and 0.46 respectively.5
μ₂: Non-AI stock return. 10%.
σ₂₂: Non-AI stock standard deviation. 15%.
σ₂₁: Covariance. This could be negative due to direct AI damage (including existential risk), or due to automation or similar effects increasing output while displacing existing firms. As noted above, to the extent that this is true, optimal non-AI stock holdings are positive. Positive covariance could be driven by positive innovation spillovers to the rest of the market. Firms in general capture 2.2% of the value they create, boosting average returns over the next 10 years by 12.4% in the narrow specification, or by 0.6% in the broad specification.6
r : Risk free rate. 5%.
ρ: Pure time preference discount rate. 2%.
η: Relative risk aversion. 2.
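As a rough sanity check of the results below, here is a minimal Python sketch of the Merton solution above under the broad-specification parameters. The post doesn't fix a single value for the covariance σ₂₁, so the sketch sweeps it over a small illustrative range:

```python
import numpy as np

# Broad-specification parameters from the post
mu = np.array([0.155, 0.10])    # mean returns: AI stock, non-AI stock
sd = np.array([0.46, 0.15])     # standard deviations
r, rho, eta = 0.05, 0.02, 2.0   # risk-free rate, pure time preference, risk aversion

for cov in (-0.02, -0.01, 0.0, 0.01, 0.02):  # illustrative covariance values
    Sigma = np.array([[sd[0]**2, cov],
                      [cov,      sd[1]**2]])
    excess = mu - r
    w = np.linalg.solve(Sigma, excess) / eta                  # optimal risky-asset weights
    sharpe_sq = excess @ np.linalg.solve(Sigma, excess)
    c_over_w = rho/eta + (eta - 1)/eta * (r + sharpe_sq/(2*eta))  # consumption as a share of wealth
    print(f"cov={cov:+.3f}  w_AI={w[0]:.2f}  w_nonAI={w[1]:.2f}  c/W={c_over_w:.3f}")
```

The non-AI weight stays positive throughout and rises as the covariance falls; weights summing to more than one simply mean the unconstrained solution borrows at the risk-free rate.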
Results
Most importantly, non-AI stock holdings are always positive, and increase as covariance declines - so higher estimates of damages do not imply shorting the market. This is because if the stocks have negative covariance, they serve as a hedge against each other - a hedge which is more valuable the greater the expected return from AI.
The differences in consumption suggested in the table above are very small. This is because wealth growth rates don't differ much across scenarios - in the optimal allocation, the additional income from access to very high-return AI stocks is used mostly to buy safe returns rather than to increase consumption. Higher wealth growth rates are then cancelled out by higher portfolio variance, which makes wealth more important as a risk-hedging tool.
What about existential risk pushing up discounting and thus consumption?
Optimal consumption is high enough in all these models for any such effect to be undetectable for most individuals. For someone with 40 years of working life remaining, discounting at 3% (5% less 2% income growth), their income flow is already only 4.2% of lifetime wealth - so they are credit constrained, desiring to borrow more at the risk-free rate but unable to do so. Income only crosses 5.7% of remaining lifetime wealth 24 years before work concludes, so in an individual's early 40s.7
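A minimal check of the annuity arithmetic behind those figures, assuming discrete annual compounding (the post's exact percentages may come from a slightly different convention):

```python
def income_share(n_years, d=0.03):
    """Annual income as a share of remaining lifetime wealth,
    discounting future wages at 3% (5% less 2% income growth)."""
    return d / (1 - (1 + d) ** -n_years)

print(round(income_share(40), 3))  # ~0.043, close to the 4.2% quoted for 40 working years remaining
print(round(income_share(24), 3))  # ~0.059, around the 5.7% threshold quoted for 24 years remaining
```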
Discussion
There are several issues in general with applying this kind of portfolio model - most wealth for most individuals is in future wages rather than assets, lowering optimal leverage; investment returns do not follow a normal distribution, raising optimal leverage; and so on. In this setting several other effects manifest.
If existential risk is high, AI stocks are a better deal than modelled here - bankruptcy options are much more valuable with high-variance assets. However, if nationalisation or political risk is greater when returns are high, then returns are overestimated. AI-induced technology could also introduce new goods, lowering effective η - if air conditioning is invented, you have something else valuable to spend higher levels of income on, so marginal utility declines less as you become richer - scaling investment choices proportionately.
Discount rates are probably correlated with AI returns, as mortality risk may be higher if the technology is coming along more quickly; discounting doesn't affect optimal portfolios, but consumption could change. Very rapid increases in wealth could also increase asset holdings amongst the old (who have converted more of their human capital to financial capital), or amongst those with lower discount rates who have therefore saved more historically, and so could have systemic effects on future premia.
Lastly, these estimates don’t account for transition dynamics. To upper bound this, if (unpriced) mortality is 50% over the next 30 years, then annual mortality is 2.3%, implying a 30% price reduction. If society realised this over 10 years, returns fall by 4.8% - falling to close to bond levels. If covariance is negative, then optimal non-AI stock holdings will certainly remain positive; in the baseline broad specification stock holdings remain positive at 7.1%. Hoping to profit off of transition dynamics thus cannot justify shorting the market overall.
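A sketch of that upper-bound arithmetic - the annualised mortality follows directly from the 50%-over-30-years figure, while the repricing uses a simple Gordon-growth model with an assumed 5% gap between the discount rate and dividend growth (that spread is my assumption, not something stated in the post):

```python
annual_hazard = 1 - 0.5 ** (1 / 30)         # 50% unpriced mortality over 30 years -> ~2.3%/year
spread = 0.05                               # assumed discount-rate-minus-growth gap (Gordon model)
price_drop = 1 - spread / (spread + annual_hazard)
print(annual_hazard, price_drop)            # ~0.023 and ~0.31, close to the post's 2.3% and 30%
```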
Conclusion
As Tyler Cowen argued in a later Marginal Revolution post, finance theory is indeed non-obvious, and there are returns to doing the sums. Those with higher appraisals of AI risk want to hold more non-AI stock, not less, as non-AI stocks then provide a more effective hedge against the failure of the high-return AI firms - and in no case modelled here do individuals wish to short the market. Higher discounting could justify higher consumption - but the faster rate of economic growth AI could enable does not, over the short term. Foreseeing a potential apocalypse generally cannot make you much money in a way that you care about.
Discounting is the process of valuing money you will receive in the future. Being promised £100 in 5 years would be preferable to being promised £100 in 20 years, even beyond being able to save it etc. Discounting is how we represent this.
It does suppress stock prices (due to higher interest rates being applied on the same stream of profits) though - we consider this effect below.
There are other mechanisms that could potentially give some exposure to AI stock growth, such as providing power or cloud infrastructure for the labs; these are probably less effective at capturing value than owning the labs directly, and as the existing assets measured are already 5% of the stock market they will not be considered here.
This violates Merton's model assumptions of constant returns over time - returns converge asymptotically to the growth rate of the fastest-growing component. Only considering returns over the next decade is roughly reasonable, though, given the high stated discount rates of the individuals whose rationality this post is modelling.
Why can the 2.2% figure be used, when the short-run fraction of profits captured is 7.8%? Other firm valuations will increase in response to greater expected future profits; as the 2.2% is the present-value portion, the corresponding adjustment will be immediately reflected in other firms' prices.
Individuals of that age still rationally hold savings, but primarily due to other credit market imperfections.
Cool thanks for writing this up!
One extension to bring this closer to people's model of AI risk is to have a time T where there's a chance that utility goes to zero (AI takeover) or a person gets a one-time utility boost from the capital they have left (i.e. their money buys them some amount of expected utility in a post-singularity future).
I think you would end up with a Merton-like portfolio for the time leading up to T with some cash saved for after the singularity.
This... assumes there is no consumption? Or that you can't spend money now to reduce risk later??
Overall I don't understand what's going on here.
> the optimal portfolio is independent of discounting - so high estimates of doom do not suppress stock holdings
I don't understand why. Could you say a bit more about the intuition here?