EPISODE INFO
HOST: Jennifer Smith
GUEST: Jason Snyder, Massachusetts Secretary of Technology
WHEN GOV. MAURA HEALEY announced in February that Massachusetts would become the first state to deploy a ChatGPT-powered AI assistant across the entire executive branch, the news was framed around efficiency: 40,000 state employees getting a tool to help them work faster.
But the mechanics of how the tool will work have placed it at the center of a debate raging among labor and data privacy advocates, and have drawn the attention of lawmakers. The rollout is also raising questions about what data is captured and how the system itself is trained.
For an AI assistant to be useful to a caseworker at the Department of Children and Families, or a benefits processor at MassHealth, or a clerk at the Registry of Motor Vehicles, it needs access to the data those workers handle. Tax records. Medical histories. RMV files. Housing and rental information. Benefits documentation.
The Commonwealth holds some of the most sensitive personal information of its more than 7 million residents — and under the Healey administration’s three-year, roughly $4.3-million-a-year contract with OpenAI, all of that is now in the orbit of a private AI system.
Technology Secretary Jason Snyder talked with CommonWealth Beacon reporter Jennifer Smith on The Codcast to explain why the administration thinks that’s the right call.
“I think it starts with this idea that AI is here,” he said. “It’s not going away. There’s no technologies that come in and then disappear.” From the administration’s vantage point, he said, “what we’re looking at first and foremost is training everybody in [AI] use. It’s an educational focus. It’s this idea that the way that we really focus on building our workforce is through training our workforce in AI.”
The fiscal logic is straightforward: state agencies are stretched thin while demand for services keeps climbing, and the administration sees AI as a way to close that gap without adding headcount. Agencies that handle regulated data would get custom, closed workspaces dedicated to their specific needs, according to Snyder.
The administration has been building toward this for a while.
The FutureTech Act, signed in 2024, put $25 million toward AI projects across state government. The Massachusetts AI Hub, backed by $100 million from the Mass Leads Act, was set up at MassTech to coordinate between government, industry, and academia. Student cohorts from Northeastern University and UMass Amherst have been embedded in state agencies, building tools that are already live: an RMV virtual assistant that has handled more than 200,000 customer inquiries, a MassDOT infrastructure tool, a complaint-processing system at the Department of Elementary and Secondary Education designed to speed up services for students with disabilities, and a new K-12 AI curriculum pilot running in 30 school districts.
On data security, Snyder says the architecture is layered. The state's ChatGPT environment is walled off from OpenAI's public training models, he said. Employee prompts stay within the state's system and aren't visible to other users, including managers.
“As a manager, if one of my employees is using AI, I don’t have access to their queries and prompts,” Snyder said. “They get to keep that and retain that personally.” A separate HIPAA-compliant workspace is in development for agencies handling regulated health data, he said, and the state conducts regular technical reviews to verify the contract protections are holding up in practice.
OpenAI's tools, in particular, are being rolled out across the federal government. The company is now partnered with the Pentagon, and the US Department of Health and Human Services has told employees to begin using ChatGPT.
Contractually, “OpenAI is obligated to ensure that our data remains in Massachusetts,” Snyder said, and “if this data were to leave, it would be bad for OpenAI. It would be a public embarrassment. Our data is prohibited from being used to train AI models, and it’s not accessible by other ChatGPT users. It’s not accessible by even other ChatGPT users in the same office. Our data is protected.”
On Beacon Hill, several bills in the current session are looking to slow the speeding AI train until clearer regulations are in place.
The FAIR Act, now before the House Ways and Means Committee, would limit employer use of AI for surveillance and automated decisions affecting hiring, pay, and job status. It also includes a provision that could throw a wrench into further state AI expansion: a bar on any state agency procuring or using an "automated decision system" unless that use is "specifically authorized in law."
Massachusetts AFL-CIO president Chrissy Lynch, whose organization has been among the bill’s most vocal backers, laid out labor’s concerns in September. “Working people aren’t buying Big Tech’s promise of a shiny AI future that will solve all of our problems,” she said in a statement. “While technological advances can improve our lives in many ways, we need real guardrails around the use of AI and other technology on the job so that working people aren’t left in the dust.”
An assortment of other bills is also in play. One, referred to the Senate Ways and Means Committee, would bar AI from making independent therapeutic decisions in behavioral and mental health settings. The Massachusetts Data Privacy Act, passed by the Senate last fall and awaiting House action, would give residents the right to access and delete their personal data and would ban the sale of sensitive information including health and geolocation data. Another Senate bill would require state agencies to audit AI tools for discriminatory impact and give individuals a legal avenue to act when those audits find problems.
Even as the Healey administration plunges ahead with AI investments, Snyder said officials are trying to be clear-eyed about the technology’s limitations and staff concerns. No employee will be required to use the AI assistant, he said, and the technology department is alert to reports of bias or reliability issues.
“In all cases within Massachusetts, we have a ‘human at the helm’ approach to using AI,” he said. “In every case, there will be a human reviewing it, there’ll be a human interacting with it prior to any public distribution of it. So AI may be used by somebody to help create it or maybe create a framework for it. But the actual usage of it will always be controlled by that person. And so that’s essential, because whatever is being posted, whatever is being provided, ultimately has a person who’s responsible for it, not AI.”
On the episode, Snyder discusses the types of AI the state is exploring (1:45), its approach to data privacy (6:00), and how the state hopes AI tools interact with employee roles and skills (19:30).