DRAFT (NOT YET READY): On Designing Mechanisms for Shared Prosperity under AI


When I was on the cusp of graduating from Harvard three years ago and sat down to write my commencement address, a number of my recent courses were on my mind — Stephen Marglin’s technical extension of Keynes’ General Theory on the role that states might play in encouraging economic prosperity and limiting economic pain; Ash Carter’s argument that, faced with oncoming technology dilemmas, states have a responsibility both to accelerate technologies that benefit their people and to design regulation for those that might harm them; and Dan Shapiro’s storied corpus of simulations on negotiation, which demonstrated how you can arrive at shared truths across otherwise divergent interests.

My last semester felt like a parting gift in competent governance — how do you make difficult decisions that affect other people deeply and which don’t have the luxury of ignoring trade‑offs? How do you make those decisions knowing that they might define the path your society takes indefinitely into the future? How do you establish the parameters that define whether your decisions are good, and whether you can change them after the fact?

These are heavy questions. Certainly, there’s a science to making optimal decisions in governance, but at the end of the day, some decisions are rightfully human and political and unique to their time. They deal with the art of transforming collective interests into concrete agreements and policies while navigating deep uncertainty and disagreement about what is to come.

This was difficult for me. I think every ambitious young person, and certainly the majority of my classmates at Harvard, fashions himself a competent leader who can make difficult decisions, perhaps even selecting the best path in a vast world of them. But until then, I had never made decisions that affected “society.” Mine had related largely to my own interests and well-being on a well-traveled path, which I understood very well and could adjust as needed to correct any wrong turns I’d taken.

Upon graduating, the path becomes less clear and much more serious. You find fewer people who have walked the same corridors. You no longer have the benefit of low-stakes experimentation and debate within the ivory tower. You no longer have the unconditional right to make decisions based exclusively on your curiosity without considering how they might affect your very survival or the well-being of those you care about.

You acquire a newfound sense of discretion and agency that makes any decision you make much more… real.

I predicted this change before graduating. And I sensed that, for me and perhaps for others like me, these decisions would feel like a thin ledge upon which you try to stand, teetering back and forth, wondering which side to land on.

I gave myself a piece of advice, which I also included in my address as a sort of predictive gift to my class: do not fear uncertainty, for you have a strong foundation that you can trust to guide you to the right answer. Control what you can, limit the plausible downsides of what you cannot, and make decisions that you can test. Perhaps in somewhat more flowery language, but this was the gist.

We are nearing another one of these cusps.

"We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come — namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour."
— John Maynard Keynes, Economic Possibilities for Our Grandchildren (1930)

Nearly a century later, Keynes's "new disease" has returned with renewed force. In the aftermath of a bull-market boom eerily reminiscent of the 1920s, the advent of generative AI has thrust the specter of technological unemployment back to center stage. Whereas Keynes viewed displacement as a "temporary phase of maladjustment" on the road to abundance, today's human-simulating models demand a fresh reckoning.

The nature of work is shifting faster than at any time in recent memory. We are at an important crossroads for labor markets, one with the potential to define the sort of world we share and how we will spend our time. Arguably, we have not yet seen widespread automation or 'AGI' through LLMs, but already in 2025 there have been mass layoffs across a number of high-paying sectors.

Microsoft alone announced plans to cut approximately 9,000 roles this summer, on top of around 6,000 in May. Meanwhile, the unemployment rate for recent college graduates jumped to 5.8 percent in Q1 2025 versus 4.2 percent overall.

This double squeeze, mass displacement of seasoned workers alongside shrinking entry-level prospects, has fueled widespread disillusionment that echoes the conditions behind earlier generations' labor movements, but the scale of this iteration feels different. Never before have we had general-purpose AI that can plausibly simulate human thought and reasoning, and the lower-skilled base of the employment pyramid is starting to erode.

One visible symptom is the anti‑work movement, which emerged from the ashes of COVID‑era labor pressure. While the anti‑work movement channels that frustration, it also sounds a broader alarm: unless we reset the terms of our social contract, automation and market turbulence will deepen inequality, erode worker agency, and threaten social cohesion.

Why Mechanism Design?

To reset these important terms of the social contract, we need more than ad hoc fixes that attempt to persuade concentrated powers to share on philosophical grounds. We need a science of institutional design that aligns individual behavior with societal goals. That's the promise of mechanism design, often called "reverse game theory." Unlike standard game theory, which predicts outcomes from existing rules, mechanism design starts with desired ends and works backward to craft rules that lead self-interested agents to achieve them. Leonid Hurwicz, Eric Maskin, and Roger Myerson formalized the field, winning the 2007 Nobel Memorial Prize in Economic Sciences for showing how to align private incentives with public goals.

A Mechanism for Our Collective Reckoning with AI

Let’s establish two core parameters for desirable outcomes:

  • Productive Automation: We want enough automation to make our economy measurably more productive and globally competitive.
  • Compensated Displacement: We don’t want wholesale worker displacement without sharing the gains from lost labor.

To achieve these, any mechanism must ensure:

  • AI firms retain strong incentives to innovate and produce ever‑better models.
  • These firms choose not to relocate to more lenient jurisdictions.
  • Workers maintain a proportional stake in automation’s gains.
  • Markets continue to function to surface people’s demands and preferences.

The Proposal: Broad Ownership Tied to Displacement

Each year, every member of society receives a non-transferable equity stake in each AI model provider, with dividend rights pegged to the economy's net labor displacement. That is, determine which jobs are being fully automated without new jobs being created in their place, and compensate the labor pool for those net losses. The displacement rate can be assessed either via O*NET task mappings and LLM-level audits or through macroeconomic analysis. Voting power remains modest; dividend rights scale directly with the displacement rate.
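As a toy illustration (the function name and all figures here are my own hypothetical choices, not part of the proposal's specification), the net-displacement measurement described above reduces to tallying fully automated jobs, netting out newly created ones, and expressing the loss as a share of the labor pool:

```python
def net_displacement_rate(jobs_automated: int,
                          jobs_created: int,
                          labor_pool: int) -> float:
    """Net share of the labor pool displaced by automation in a given year.

    Clamped to [0, 1]: if new jobs outpace automation, the rate is zero
    and no additional public equity is issued that year.
    """
    if labor_pool <= 0:
        raise ValueError("labor pool must be positive")
    net_lost = jobs_automated - jobs_created
    return min(max(net_lost / labor_pool, 0.0), 1.0)

# Hypothetical year: 3M roles fully automated, 1M new roles created,
# in a 160M-person labor pool -> a 1.25% dividend-rights stake.
print(net_displacement_rate(3_000_000, 1_000_000, 160_000_000))
```

In practice the two job counts would come from the O*NET task mappings, audits, or macroeconomic analysis mentioned above; the sketch only fixes the arithmetic of the stake.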

Note: I want to figure out the risk of collective ownership over model providers rather than application-level firms. If they're one and the same, this seems to work well. If they're different, application-level firms may still be incentivized to automate as much as possible. In that case, we might include some provision for application-level firms beyond a certain size, the way China does things: allow start-ups to innovate, and once they become integral parts of the economy, introduce some public accountability through shared ownership while they retain their market mechanisms.

By linking ownership directly to displacement, we can easily imagine what the incentive structures look like:

  • If a model provider's technology leads to 30 percent labor displacement, 30 percent of its dividends flow to workers and the public, while the firm retains the majority of the upside when its technology contributes to augmentation over automation.
  • If a model provider's technology leads to a lower 10 percent labor displacement, only 10 percent of dividends go to workers and the public, preserving 90 percent of the upside for the firm. This is equivalent to a low-dilution funding round and encourages modest, targeted automation without chilling innovation.
  • If a model provider's technology is being used to displace an extreme 90 percent of labor, 90 percent of dividends go to the workers and public who no longer have work, leaving only 10 percent of the upside to the firm. This is an extreme situation with an extreme but appropriate remedy: it strongly disincentivizes pure replacement and steers firms toward augmentation strategies.
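The three scenarios above are instances of a single rule, sketched here with a hypothetical dividend pool (the function and figures are illustrative, not prescribed by the proposal):

```python
def dividend_split(total_dividends: float,
                   displacement_rate: float) -> tuple[float, float]:
    """Split a provider's annual dividends into (public, firm) portions,
    with the public share pegged one-to-one to measured displacement."""
    if not 0.0 <= displacement_rate <= 1.0:
        raise ValueError("displacement rate must be in [0, 1]")
    public = total_dividends * displacement_rate
    return public, total_dividends - public

# The scenarios above, applied to a hypothetical $1B dividend pool:
for rate in (0.30, 0.10, 0.90):
    public, firm = dividend_split(1_000_000_000, rate)
    print(f"{rate:.0%} displacement -> ${public:,.0f} public, ${firm:,.0f} firm")
```

The one-to-one peg is the simplest possible schedule; a convex schedule (penalizing high displacement more than proportionally) would push even harder toward augmentation.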

Thus, compensation scales continuously with displacement: the more labor a provider's technology replaces, the larger the share of its gains that flows back to the public.

Preventing Perverse Incentives

Critics worry firms will relabel automation as “augmentation” to dodge sharing. We counter with:

  • O*NET Analytics: Objectively map task shifts year over year.
  • Wage & Audit Cross‑Checks: Detect hidden automation through spot audits.
  • Independent Oversight: A semi‑autonomous council of economists, technologists, and labor reps validates data and adjusts formulas.

Aligning Innovation and Competitiveness

Will this throttle R&D or provoke capital flight? Several factors suggest not:

  • Market Advantages: Deep talent pools and venture networks still attract innovators.
  • Global Coordination: Parallel adoption by advanced economies levels the field.
  • Tax Incentives: High baseline AI taxes can be offset by deductions for displacement dividends—mirroring ESOP treatment.

Empirical Foundations & Dynamic Feedback

We can ground the mechanism empirically and adjust it over time:

  1. Baseline Measurement: Pre‑pilot displacement benchmarks across sectors.
  2. Iterative Adjustment: Annual share updates and minor calibrations to guard against gaming.
  3. Policy Complementarities: Fund reskilling, transition assistance, and universal services (health, housing, education) via displaced equity flows.
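One way to make the annual adjustment in step 2 resistant to gaming is to smooth the public share across years, so a one-off reclassification of automation as augmentation moves the share only partway. The smoothing rule below is my own illustrative assumption, not part of the proposal:

```python
def updated_share(prev_share: float,
                  measured_rate: float,
                  alpha: float = 0.5) -> float:
    """Blend last year's public dividend share with this year's measured
    displacement rate. A smaller alpha damps year-to-year swings, so a
    firm that briefly relabels automation as augmentation gains little."""
    if not (0.0 <= prev_share <= 1.0 and 0.0 <= measured_rate <= 1.0):
        raise ValueError("shares and rates must be in [0, 1]")
    return (1 - alpha) * prev_share + alpha * measured_rate

# A firm at a 20% public share that reports zero displacement this year
# drops only to 10%, not to 0%, limiting the payoff of one-off gaming.
print(updated_share(0.20, 0.0))
```

The oversight council described above would choose alpha, trading responsiveness to genuine augmentation against robustness to reporting games.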

Extending Beyond Digital Models

While this proposal focuses on LLM providers, the principle extends to:

  • Physical automation in manufacturing, logistics, and agriculture.
  • Basic‑needs systems—food, water, energy, shelter.
  • Global income sharing as remote work and cross‑border labor expand.

The Broader Social Contract

This mechanism complements a Keynesian framework:

  • Brake & Accelerate: Public policy should curb pure displacement while fast‑tracking augmentative AI.
  • Redistribution via Ownership: Equity stakes replace purely fiscal transfers with tangible shares in productivity.
  • Democratic Renewal: Embedding ownership in individuals revives agency and stake in technological progress.

Conclusion & Call to Action

We stand at a cusp as profound as that commencement day. Rapid layoffs, a broken entry‑level market, and rising anti‑work sentiment demand a reset. By distributing ownership in AI/model providers at rates pegged to labor displacement, we align incentives, preserve innovation, and ensure automation’s bounty flows to those it replaces.

Next steps: pilot displacement studies, convene policymakers, technologists, labor unions, and civic leaders, and establish a robust oversight body. We can design, test, limit downsides, and refine—transforming automation from threat into shared prosperity.