Universal Basic Agency

Beyond Basic Income: Empowering People in the Age of AI Agents

Universal Basic Income addresses financial security, but a new frontier of inequality is emerging that threatens democratic governance and economic opportunity far more insidiously than simple wealth disparities. As autonomous AI systems become capable of complex planning and independent action, those who control multiple high-quality agents will accumulate advantages far exceeding those without such access. We need Universal Basic Agency—ensuring every citizen receives baseline autonomous AI assistance as a fundamental public service, just as we guarantee access to education, infrastructure, and emergency services.1

What is Agentic Inequality?

“Agentic inequality” describes disparities in power and opportunity arising from unequal access to AI agents.1 Unlike traditional software tools that simply augment human ability—calculators making arithmetic faster, word processors making writing easier—AI agents function as autonomous delegates capable of complex planning, independent action over extended timeframes, and even negotiating with other agents on behalf of their owners.1 This represents a qualitatively different form of technological capability, one that delegates agency itself rather than merely enhancing existing human capacities.

This new category of inequality manifests across three distinct dimensions that compound one another.1 The first concerns availability: the basic divide between those who have access to agents and those who do not. The second concerns quality: disparities in agent capabilities, encompassing differences in intelligence, processing speed, reliability, access to tools and databases, and behavioral sophistication.1 The third concerns quantity: the ability to deploy multiple coordinated agents working in parallel, multiplying effective power through sheer parallelism.1 Someone controlling dozens of high-quality agents possesses advantages that dwarf single-agent access by orders of magnitude, creating a form of extreme inequality that traditional redistribution mechanisms cannot address.
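To make the compounding concrete, a toy calculation helps. The multiplicative scoring model and the specific numbers below are illustrative assumptions, not measurements from the cited paper:

```python
# Toy model of "effective agent power" as the product of the three
# dimensions. The multiplicative form and the numbers are illustrative
# assumptions, not empirical estimates.

def effective_power(has_access: bool, quality: float, quantity: int) -> float:
    """Hypothetical score: no access yields zero; otherwise per-agent
    quality scales with the number of agents deployed in parallel."""
    if not has_access:
        return 0.0
    return quality * quantity

# A citizen with one mid-tier agent versus a firm running fifty
# frontier-quality agents in parallel.
citizen = effective_power(True, quality=1.0, quantity=1)
firm = effective_power(True, quality=3.0, quantity=50)
print(f"Firm-to-citizen power ratio: {firm / citizen:.0f}x")  # 150x
```

Even with modest assumed quality gaps, the quantity dimension alone multiplies the disparity far beyond what any single-axis comparison would suggest.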

The Economic Stakes: Concentration and Consumer Exploitation

The economic implications of agentic inequality extend far beyond simple job displacement. AI agents are already transforming how value is created and captured in the economy. Google now generates approximately twenty-five percent of its code using AI agents, demonstrating the production capabilities these systems already possess.2 As these systems become more capable, we face accelerated wealth concentration as capital owners deploy agent swarms to capture larger shares of economic value while workers find themselves displaced without the resources to deploy competing agent labor.1

The rise of superstar firms—companies with superior agent capabilities that dominate markets and crush competitors—threatens competitive markets themselves.1 When a handful of corporations control the most sophisticated agent infrastructure, they gain decisive advantages in every domain from research and development to customer acquisition to regulatory compliance. Smaller competitors without comparable agent resources cannot keep pace, leading to winner-take-all dynamics that concentrate economic power in fewer hands.

Perhaps most immediately concerning is the threat of systematic consumer exploitation as corporate agents negotiate against individuals’ less-capable or nonexistent personal agents.1 Research on AI-mediated negotiations reveals that deal-making between large language model agents in consumer settings produces inherently imbalanced outcomes, with different AI agents showing large disparities in the terms they obtain for their users.3 Stronger agents can exploit weaker ones to secure better deals: buyers using weaker agents tend to pay approximately two percent more than in scenarios where both agents possess equal capabilities.3 Companies that claim their agents are superior present a tempting proposition for powerful clients, who can impose their choice of agent on smaller counterparts, magnifying power imbalances.4
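The cited two-percent gap sounds small per transaction, but it accumulates. The sketch below applies it to a hypothetical household budget; the spending figure, time horizon, and the flat uniform rate are assumptions for scale, not results from the study:

```python
# Cumulative cost of the ~2% per-deal disadvantage reported for weaker
# agents. The annual spend, the 30-year horizon, and applying a flat 2%
# to every purchase are illustrative assumptions only.

WEAK_AGENT_MARKUP = 0.02  # ~2% worse terms per agent-negotiated deal

def extra_cost(annual_spend: float, years: int,
               markup: float = WEAK_AGENT_MARKUP) -> float:
    """Total overpayment if every purchase closes `markup` worse."""
    return annual_spend * markup * years

# A household routing $40,000/year of purchases through a weaker agent.
print(f"Extra paid over 30 years: ${extra_cost(40_000, 30):,.0f}")  # $24,000
```

A persistent small disadvantage in every transaction behaves like a private tax levied by whoever fields the stronger agent.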

Imagine trying to negotiate a mortgage while a bank deploys hundreds of sophisticated AI agents analyzing your finances, optimizing their offer to maximize profit while minimizing your understanding of alternatives. Imagine disputing a medical bill when the insurance company’s agents can process millions of claims data points to identify exactly which denials you are least likely to successfully appeal. Imagine challenging an employer’s decision when their legal agents can instantly research precedents and craft responses while you struggle to even understand your rights. Without personal AI agents of comparable capability, individuals become systematically disadvantaged in every consequential transaction.

The Political Power Dimension

Economic inequality translates directly into political inequality, and agentic inequality threatens democratic governance at its foundation. Those controlling powerful agents possess unprecedented capabilities to shape public discourse and influence government.1 They can flood social media platforms with agent-generated content, drowning out organic voices and manipulating information ecosystems through coordinated agent networks.1 They can automate lobbying and influence campaigns at previously impossible scales, with AI agents drafting personalized communications to legislators, generating astroturf advocacy, and identifying precisely targeted pressure points.1

The advantage extends to basic government services, with those possessing sophisticated agents navigating bureaucracy faster and more effectively than those without such assistance.1 When government websites implement AI systems, when regulatory compliance requires processing complex documentation, when benefit applications demand navigating labyrinthine procedures, those with capable personal agents succeed while those without face systematic disadvantage. The state itself becomes preferentially accessible to those who can afford the best agents.

The United Nations Conference on Trade and Development’s Technology and Innovation Report for 2025 predicts that artificial intelligence could affect up to forty percent of jobs worldwide through automation and job displacement.5 This massive disruption will generate intense political contestation over responses—debates over taxation, social insurance, retraining programs, and fundamental questions about the social contract. Those equipped with sophisticated AI agents to research, advocate, organize, and influence will shape outcomes to their advantage while those without such resources struggle to even comprehend the policy landscape.

Without intervention, AI agents will amplify existing power structures rather than democratize capability, entrenching inequality across both economic and political dimensions.1 The technology that could liberate humanity from drudgery may instead create new forms of domination as stark as feudalism.

Congressional Response: The Algorithmic Accountability Act

Congress has begun responding to the dangers of unaccountable automated decision-making systems, though legislation remains far short of what Universal Basic Agency requires. On September 19, 2025, Congresswoman Yvette D. Clarke introduced the Algorithmic Accountability Act of 2025 in the House of Representatives, with Senator Ron Wyden sponsoring the companion Senate bill.6 The House version secured twenty-one bipartisan cosponsors and endorsements from organizations including Color Of Change, the Consumer Federation of America, and AI For the People.6

The legislation would require large companies to evaluate how their automated decision systems affect people and to disclose when and how they deploy algorithmic systems in critical areas affecting Americans’ lives.6 The bill addresses automated systems that perpetuate bias in high-impact domains including housing, employment, credit, and education, tackling concerns that vulnerable populations face discriminatory outcomes when corporations delegate consequential decisions to algorithms prone to prejudice.6 The Algorithmic Accountability Act mandates impact assessments for automated decision systems used in augmented critical decision processes, requiring companies to check their systems for bias and explain how they work to prevent discrimination in jobs, loans, housing, and other opportunities.7

The Algorithmic Accountability Act represents important progress toward transparency and accountability, but it addresses only corporate and institutional deployment of AI systems. It does not guarantee that individuals possess the agent capabilities needed to effectively exercise the rights it creates or to negotiate on equal footing with corporate agents. Knowing that an algorithm affects you provides little protection if you lack the sophisticated AI assistance necessary to challenge discriminatory outcomes, negotiate better terms, or pursue alternative options.

Universal Basic Agency: A Solution Framework

Universal Basic Agency means every citizen receives baseline autonomous AI assistance, analogous to how Universal Basic Income provides financial security or how public education provides baseline knowledge capabilities.1 This represents not merely a welfare program but a fundamental infrastructure investment ensuring that agent capability—like literacy, numeracy, and internet access—becomes a public good rather than a luxury commodity.

The policy could take several complementary forms. Direct government provision would mean the federal government provides every citizen with access to a basic AI agent capable of navigating government services and bureaucracy, managing personal finances and reviewing contracts, researching healthcare options and disputing medical bills, assisting with job searches and applications, and protecting consumer rights in automated negotiations.1 These agents would function as personal advocates, ensuring that complex systems remain accessible and that individuals possess the capability to effectively exercise their rights.

Subsidized private access represents an alternative or complementary approach, where the government subsidizes or regulates access to ensure everyone can afford quality AI agent services, similar to telecommunications universal service obligations that require affordable phone service in rural areas.1 This market-based approach could leverage private sector innovation while ensuring baseline access through public funding and regulatory requirements preventing discriminatory pricing or service denial.

Open public infrastructure investment would fund the development of open-source agent platforms that anyone can use, preventing corporate lock-in and ensuring baseline capabilities remain accessible regardless of ability to pay.1 This approach mirrors successful public infrastructure like the internet protocol stack, GPS satellites, and research databases—technologies developed with public investment that created enormous private sector value while remaining universally accessible. An open-source agent platform would enable developers, nonprofits, and civic organizations to build specialized agents serving particular communities or needs without requiring permission from or payment to proprietary platform owners.

Consumer Reports has begun exploring the potential for developing pro-consumer AI agents that prioritize user interests above all else, recognizing that market-provided agents may face inherent conflicts of interest when companies profit from user ignorance or disadvantage.8 A truly effective Universal Basic Agency program would likely combine elements of all three approaches—direct government provision for essential services, subsidized private access to leverage innovation, and open infrastructure enabling civic and nonprofit participation.

The Economic Case: AI Inequality Research

Recent research reveals competing dynamics in how AI affects economic inequality, with outcomes depending critically on who controls agent capabilities. An April 2025 working paper from the International Monetary Fund finds that AI could reduce wage inequality by primarily disrupting high-income jobs, unlike previous automation waves that disproportionately affected middle-skill workers.9 However, this optimistic projection depends on diffuse AI access rather than concentrated agent control. Other research shows that higher-income workers are more likely to experience productivity boosts from AI, with exposure to productivity gains concentrated at the higher end of the income distribution, peaking around $90,000 per year and remaining high for six-figure salaries.10

The resolution of these contradictory findings depends on whether AI remains a widely accessible productivity tool or becomes concentrated as proprietary agent capability. Research on wealth distribution effects finds a temporal dichotomy: in the short term, AI exacerbates disparities in wealth distribution, while long-term outcomes depend on the extent of AI’s influence across different technological domains.11 Elevated market concentration, as the generative AI market becomes increasingly dominated by a small number of large businesses, will tend to generate higher markups and result in a growing fraction of productivity gains going to corporations rather than workers.10

Generative AI is estimated to contribute between $1.7 trillion and $3.4 trillion in global economic growth over the next decade, but this growth could worsen income inequality and widen global disparities if benefits flow primarily to agent-controlling capital owners rather than workers.10 Global inequality in AI capability appears particularly stark: in 2023, the United States alone secured $67.2 billion in AI-related private investment, 8.7 times more than China, allowing the United States to lead in AI innovation with sixty-one notable AI models produced that year.12 The concern extends beyond inequality between workers and capital within countries to inequality between nations, with AI potentially amplifying existing development gaps as wealthy nations monopolize transformative technology.12

Universal Basic Agency represents a policy intervention designed to prevent concentration of agent capabilities and ensure that AI productivity gains benefit everyone rather than accruing solely to those who already possess wealth and power. By guaranteeing baseline agent access, we create conditions for AI to reduce rather than exacerbate inequality.

Policy Implementation Framework

Implementing Universal Basic Agency requires systematic attention to measurement, standards, investment, and legal frameworks. We must first develop metrics to track who possesses access to what quality and quantity of agents, making agentic inequality visible and measurable just as we track income inequality through Gini coefficients and wealth concentration through distribution analysis.1 Without measurement, we cannot assess whether policies effectively address disparities or whether inequality continues growing invisibly.
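The Gini-style measurement the paragraph mentions carries over directly to agent access. The sketch below computes a standard Gini coefficient over a made-up distribution of per-person agent-capability scores; the input data are invented for illustration, while the formula is the conventional one:

```python
# Gini coefficient over a hypothetical distribution of per-person
# agent-capability scores. The scores are made up for illustration;
# the formula is the standard Gini (mean absolute difference / 2*mean).

def gini(values: list[float]) -> float:
    """Standard Gini via the sorted-rank closed form: 0 = perfect
    equality, values near 1 = extreme concentration."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))              # 0.0  (everyone equally equipped)
print(round(gini([0, 0, 0, 100]), 2))  # 0.75 (one actor holds everything)
```

The hard part is not the statistic but the input: defining a defensible per-person capability score is precisely the measurement work the paragraph calls for.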

We must use participatory democratic processes to define acceptable limits on agentic inequality, determining what levels of disparity society will tolerate before intervention becomes necessary.1 This normative question parallels debates over acceptable income inequality, except that agent inequality affects capability and power more directly than mere wealth, making extreme disparities potentially incompatible with democratic equality.

We must mandate technical standards requiring interoperability and open protocols to prevent platform lock-in and ensure that agents can work across systems.1 Proprietary agent ecosystems that trap users create switching costs that undermine competition and enable exploitation. Open standards ensure that individuals can choose or change agent providers without losing accumulated data, learned preferences, or functional capabilities.
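One concrete piece of such a standard would be a portable export of a user's agent state, so that switching providers does not mean losing learned preferences. The JSON structure and field names below are hypothetical, invented for illustration rather than drawn from any existing protocol:

```python
# Hypothetical agent-portability export. The schema, version tag, and
# field names are invented for illustration; no such standard exists yet.
import json

def export_agent_profile(user_id: str, preferences: dict,
                         history_summary: str) -> str:
    """Serialize the data a user would need to carry to a new provider."""
    profile = {
        "schema_version": "0.1",     # hypothetical version tag
        "user_id": user_id,
        "preferences": preferences,  # learned settings a new agent imports
        "history_summary": history_summary,
    }
    return json.dumps(profile, indent=2)

exported = export_agent_profile(
    "citizen-42",
    {"negotiation_style": "conservative", "budget_alerts": True},
    "Prefers fixed-rate offers; disputes billing errors promptly.",
)
print(exported)
```

A real standard would also need authentication, versioned migration rules, and an agreed vocabulary for preferences; the point is only that portability is a data-format problem as much as a legal one.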

We must invest substantially in public options, funding development of government-provided or open-source AI agent infrastructure that establishes baseline capabilities independent of private sector provision.1 This investment parallels the public infrastructure that enabled private innovation—the interstate highway system, GPS satellites, the internet itself—recognizing that some capabilities are too fundamental to leave entirely to market provision.

Finally, we must adapt legal frameworks to the reality of agent-mediated interactions, establishing fiduciary duties for agents to prioritize user interests, liability rules clarifying responsibility for agent actions, and consumer protections preventing exploitation in agent negotiations.1 The California Privacy Protection Agency has proposed regulations for automated decision-making technology that could set de facto national standards, and similar frameworks must extend to personal agents to ensure they serve users rather than third-party interests.13

The Connection to Universal Basic Income

Universal Basic Agency complements rather than replaces Universal Basic Income. UBI addresses material needs by providing financial security; UBA addresses power and capability by providing agent assistance. Together, they ensure that AI and automation serve everyone rather than concentrating benefits among those who already possess wealth and power.1

In the age of AI agents, economic security without economic agency leaves people financially stable but fundamentally powerless. A citizen receiving UBI payments but lacking agent assistance remains vulnerable to exploitation in every transaction, disadvantaged in every bureaucratic interaction, and excluded from effective political participation. Conversely, providing agent access without economic security fails to address material deprivation that prevents people from benefiting from enhanced capabilities. The combination proves necessary: UBI ensuring everyone has resources to live, UBA ensuring everyone has capabilities to thrive.


References

  1. Sharp, M., Bilgin, O., Gabriel, I., & Hammond, L. (2025). “Agentic Inequality.” arXiv:2510.16853. Retrieved from https://arxiv.org/html/2510.16853v2

  2. USAII. “Can Universal Basic Income (UBI) Be A Sustainable Response to The Rise of AI Agents?” Retrieved from https://www.usaii.org/ai-insights/can-universal-basic-income-ubi-be-a-sustainable-response-to-the-rise-of-ai-agents 

  3. Gaitsgory, A., Aher, G. V., & Hadfield-Menell, D. (2025). “Towards Fair and Trustworthy Agent-to-Agent Negotiations in Consumer Settings.” arXiv:2506.00073. Retrieved from https://arxiv.org/html/2506.00073v1

  4. INSEAD Knowledge. “The Power of AI to Shape Negotiations.” Retrieved from https://knowledge.insead.edu/strategy/power-ai-shape-negotiations 

  5. eWeek. “AI Boom Risks 40% of Jobs, Deepens Inequality — UN Report.” Retrieved from https://www.eweek.com/news/un-ai-report-inequality-jobs/ 

  6. Rep. Clarke, Y. “Clarke Introduces Bill to Regulate AI’s Control Over Critical Decision Making in Housing, Employment, Education, and More.” Retrieved from https://clarke.house.gov/clarke-introduces-bill-to-regulate-ais-control-over-critical-decision-making-in-housing-employment-education-and-more/

  7. Congress.gov. (2025). “S.2164 - 119th Congress (2025-2026): Algorithmic Accountability Act of 2025.” Retrieved from https://www.congress.gov/bill/119th-congress/senate-bill/2164/text 

  8. Consumer Reports Innovation. “Empowering Consumers with Personal AI Agents: Legal Foundations and Design Considerations.” Retrieved from https://innovation.consumerreports.org/empowering-consumers-with-personal-ai-agents-legal-foundations-and-design-considerations/ 

  9. International Monetary Fund. (2025, April). “AI Adoption and Inequality.” IMF Working Paper. Retrieved from https://www.imf.org/en/Publications/WP/Issues/2025/04/04/AI-Adoption-and-Inequality-565729 

  10. Brookings Institution. “AI’s impact on income inequality in the US.” Retrieved from https://www.brookings.edu/articles/ais-impact-on-income-inequality-in-the-us/

  11. PMC. “Analyzing wealth distribution effects of artificial intelligence: A dynamic stochastic general equilibrium approach.” Retrieved from https://pmc.ncbi.nlm.nih.gov/articles/PMC11786846/ 

  12. Center for Global Development. “Three Reasons Why AI May Widen Global Inequality.” Retrieved from https://www.cgdev.org/blog/three-reasons-why-ai-may-widen-global-inequality

  13. National Law Review. “Understanding Agentic AI and its Legal Implications.” Retrieved from https://natlawreview.com/article/intersection-agentic-ai-and-emerging-legal-frameworks