As generative artificial intelligence (Gen AI) continues to permeate various sectors, government agencies are increasingly looking to leverage the technology while protecting public interests. A recent conference of the International Association of Privacy Professionals (IAPP) in Boston, attended by more than 1,200 lawyers, technologists, and policy advisors, delved into this pressing issue.
Embracing AI with Caution and Responsibility
The rapid adoption of Gen AI tools in government operations raises crucial questions about their effective use and the risks involved. In Connecticut, for instance, new state legislation mandates an inventory of all Gen AI and automated decision-making tools by the end of the year, a step toward a comprehensive AI policy and an AI Bill of Rights in 2024.
Cities like San Jose, California, and Seattle are also taking proactive measures. San Jose requires approval from the city’s digital privacy office for algorithmic tools, while Seattle mandates approval from the purchasing division. San Jose further maintains a public-facing algorithm register, offering transparent explanations of AI tools and their applications.
Setting Core Values for AI Utilization
Balancing innovation and regulation is critical. Maine's decision to pause Gen AI use by state agencies for six months highlights this challenge. By contrast, Pennsylvania's approach, set out in Gov. Josh Shapiro's executive order, rests on ten fundamental values guiding AI use in state operations, with a focus on empowerment, mission fulfillment, and privacy protection.
Municipalities are also stressing accountability in AI use. Boston's interim guidelines, for example, require employees to fact-check AI-generated content and to disclose when AI has been used. Both Boston and Santa Cruz County emphasize protecting resident privacy and treating AI prompts as public records.
Legal Industry: AI’s Impact on Access to Justice
AI’s role in legal interpretation is a topic of intense debate. While AI tools could democratize legal services by making them more accessible and affordable, risks remain, including inherent bias and the need for nuanced judgment in legal advice. As suggested in Vanderbilt University’s Journal of Entertainment and Technology Law, the legal industry might benefit from initially applying AI in more stable fields of law.
The State of Utah’s innovative legal services sandbox, the Office of Legal Services Innovation, is a pioneering example of promoting legal AI tools. The initiative supports novel legal service platforms while ensuring consumer protection, with participating platforms audited monthly for potential harm.
However, the American Bar Association’s restrictions on non-attorney ownership of law firms present a barrier to AI-driven legal innovation. While intended to protect attorney independence, the rule limits the collaboration between the legal and tech sectors that is crucial for advancing AI in legal services.
As AI continues to evolve, government agencies must walk the fine line between harnessing its potential and ensuring public safety and trust. With careful policymaking, thorough auditing, and a focus on ethical AI use, agencies can integrate Gen AI into their operations effectively, enhancing service delivery while upholding public interests.