Last month, Gov. Maura Healey was joined by a virtual Sam Altman of OpenAI to announce, from the headquarters of a wearable AI device startup in Boston, that the state had signed a contract to bring ChatGPT to employees of the executive branch.
“Getting governments to lead the way and show how to use this technology, and where this technology is going to do things better, faster, more effectively, that seems really important,” Healey said, according to The Boston Globe. “I’d love it if this can be an example to the rest of the world.”
The announcement has been met with controversy. State Rep. Erika Uyterhoeven (D-Somerville), one of the deal’s most vocal critics among elected officials, points to a lack of clear privacy protections in the contract and to alleged conflicts of interest on the advisory board that recommended OpenAI. And Jonathan Cohn, policy director for Progressive Massachusetts, said the governor did not engage with state employees to gauge whether the tools would actually be helpful before committing to the OpenAI contract.
“She seemed proud to be the first state to do this, when no one was clamoring to do this,” he told The Shoestring.
But according to an investigation by The Shoestring, the state has been doing more than just making ChatGPT available to the approximately 40,000 employees of executive agencies, per Healey’s announcement. A 2024 report to the governor from her AI task force — made up of academic researchers, corporate representatives, and agency heads — envisioned the creation of an AI hub that opens access to data and compute resources for academia, industry, and government alike. According to records reviewed by The Shoestring, that vision, which the Healey administration hopes will attract talent and investment to the state, is well underway.
After a year of experimentation with an “AI sandbox,” agencies are now beginning to roll out these tools for real. Records obtained by The Shoestring show that at least 40 state agency functions, many of which touch sensitive resident data, are now or may imminently be powered by AI.
That’s according to an internal survey of use cases collected by the Executive Office of Technology Services and Security (EOTSS), the state agency responsible for administering IT services. Some of these applications include summarizing benefits calls, processing Medicaid claims, use of Meta Ray-Bans as an accessibility tool in the chief medical examiner’s office, streamlining the document navigation process for department of transportation engineers, and answering questions for Mass.gov visitors.
Many others are unknown, and the state is saying little about how it is evaluating the possible risks to the residents these systems serve and the workers whose jobs they are reshaping.
Following months of negotiation between this reporter and EOTSS, the state ultimately withheld all details related to 31 out of 40 cataloged AI use cases, as well as cost and usage reports, and technical logs that would confirm the state’s claims that it has been responsibly handling sensitive constituent data.
After an appeal, Manza Arthur, the state’s supervisor of public records, ordered EOTSS last month to make the records available for an “in camera” review by her office to determine whether EOTSS’ refusal to provide them was legal under the state’s public records law. Under the law, EOTSS has ten business days to provide those records, and the supervisor of records then has up to 15 business days to review them and issue a determination.
What records EOTSS did make available fall short of demonstrating the state’s professed commitment to data privacy. For example, of the nine items released, not one reported having undergone a privacy impact assessment, despite some of those tools being used to process Social Security numbers and Medicaid data. The spreadsheet included fields for recording these assessments, but they were left blank without explanation.
In an emailed statement, EOTSS spokesperson Christopher Smith did not directly answer The Shoestring’s question about the missing privacy assessments. Neither Smith nor Karissa Hand, a spokesperson for Healey’s office, responded to a follow-up email seeking clarification.
“As is the case with IT procurements throughout state government, TSS followed a rigorous, transparent procurement process to select the AI Assistant platform,” Smith wrote. “We entered this process with the goal of identifying an enterprise-grade solution that could support state employees, while centering our values of privacy, security, equity, and responsible innovation. Our standard operating environment for state IT systems ensures that our work advances those values.”
Smith said that the state’s AI assistant “operates within a walled-off, secure environment that protects state data and ensures that employee chat inputs do not train public AI models.”
“Use of the tool is governed by terms and conditions set by the TSS Privacy Office, and regularly updated policies that govern the use and development of AI,” Smith said.
***
When a Massachusetts resident calls the Department of Transitional Assistance about their SNAP benefits, an AI system could be listening, according to records sent to The Shoestring.
In December, the DTA rolled out a pilot program that transcribes SNAP eligibility calls in real time and generates a summary that caseworkers review and save directly into the caller’s benefits record. Those calls may touch on a SNAP recipient’s medical history, immigration status, or other personal information. DTA says that call summary data generated will be stored in state-owned systems and will follow existing access control and data retention policies.
The tool is designed to reduce call handle times, improve the consistency of case notes, and free caseworkers to focus on the conversation rather than notetaking, according to the EOTSS survey. After a call ends, the AI generates a structured summary that the caseworker can edit before it’s saved to the state’s BEACON eligibility system, which is the system of record for benefits administration. Callers are notified before they’re connected to a staff person and can opt out, according to DTA, which says about 400 calls have been processed through the system since it was first rolled out in December.
Full transcripts of calls aren’t saved, only the AI-generated summaries, according to technical documentation reviewed by The Shoestring. A spokesperson for DTA declined to share the AI prompt that generates the summaries, citing “the proprietary nature of the prompt development.” The tool was developed in collaboration with tech consultancy Accenture, which late last year laid off 11,000 of its own employees as part of an AI-focused restructuring.
Service Employees International Union Local 509, which represents around 9,000 state employees, including DTA call center workers, told The Shoestring that it did reach an agreement with DTA regarding the call summarizer tool. Natalia Berthet Garcia, a spokesperson for the union, said that its members are satisfied with that agreement because it will protect jobs and ensure use of the tool is voluntary.
“A huge number of calls go unanswered every day at DTA due to capacity,” Berthet Garcia said. “While this tool may be helpful, what will actually allow DTA to help more clients is hiring more staff to handle the work, which we are currently fighting for with our allies in the state budget.”
The deployment comes at a particularly sensitive time for the state’s benefits system.
Massachusetts is part of a 21-state coalition that has been litigating against the Trump administration’s demands for SNAP recipient data. The federal government, citing an executive order calling for “unfettered access to comprehensive data from all State programs that receive federal funding, including, as appropriate, data generated by those programs,” has threatened to withhold administrative funding from states that don’t hand over detailed personal information on every SNAP recipient, including names, Social Security numbers, home addresses, and immigration statuses dating back to 2020.
A federal court has blocked the demands twice. U.S. District Judge Maxine Chesney in California first issued a preliminary injunction in 2025, finding that the U.S. Department of Agriculture’s proposed data-sharing protocol was unlawful because it would allow the data to be shared with agencies unrelated to SNAP administration, including, potentially, for immigration enforcement.
When USDA attempted to circumvent that order — sending new letters to states in November and December 2025 threatening to cut off administrative funding unless they complied with what it called updated security protocols — the coalition went back to court. On Feb. 26, Chesney issued a second order enforcing the original injunction and barring USDA from penalizing noncompliant states while the lawsuit proceeds.
Whether AI-generated call summaries could be subject to federal data demands is unclear. Trump’s executive order covers “data generated by” federally funded state programs.
A spokesperson for Attorney General Andrea Joy Campbell’s office, which has co-led the legal challenge to federal demands for SNAP recipient data and has separately issued formal guidance on AI compliance with the state’s data protection law, declined to comment.
***
Several of the projects The Shoestring identified handle sensitive data.
In addition to the DTA call summarization tool, MassHealth is piloting a similar system, built by Accenture, for its third-party liability call center, which will process personally identifiable information, including data regulated by the Health Insurance Portability and Accountability Act, or HIPAA. That project is hosted on Accenture’s servers, according to records sent to The Shoestring — not the state’s “sandboxed” environment that it says is intended to keep its data safe.
Other disclosed projects include a MassHealth chatbot that helps eligibility workers answer member questions, built in part by Northeastern University students; an RMV virtual assistant that answers licensing questions on Mass.gov using publicly available content; a MassDOT chatbot that helps engineers navigate technical specifications; a document-processing tool for paid family and medical leave claims; a mentor-mentee matching system at MassDOT; and an Otter AI closed-captioning tool for police training videos at the Municipal Police Training Committee.
The projects that handle the least sensitive data tend to be the most carefully documented, though it’s unclear whether that is by design or coincidence. Comprehensive technical details for the public-facing RMV virtual assistant, for instance, can be found in a GitHub repository published by the Burnes Center at Northeastern University, which partnered with the state to help develop some AI applications.
By contrast, the DTA call summarization tool — which processes Social Security numbers and medical information — lists its interaction data plan as something to “be defined during the requirements gathering and design phases,” despite already being in production. Like the MassHealth tool, DTA’s call center assistant was built by Accenture.
***
In February, the Executive Office of Health and Human Services issued a request for information, or RFI, for its Medicaid third-party liability program, which identifies other insurance that should pay medical bills first. The request detailed how vendors may use AI in fulfilling the contract, indicating that custom training, or “fine-tuning,” of AI models for the task is allowed.
A draft contract attached to the RFI does not forbid training AI on MassHealth data — a provision that the state’s contract with OpenAI does have. Instead, it is structured around the expectation that vendors will refine their models on this data. So, to prevent sensitive data from leaking, the state proposed a requirement to return or destroy any processed data following the end of the contract.
But while the contract is specific about what happens when a vendor leaves, it says little about what happens while they’re working. Its fairness requirements ask only that vendors take “reasonable actions” to ensure AI systems are, “to the extent reasonably possible, free of harmful biased and discriminatory results.”
The MassHealth procurement envisions one vendor training on one dataset. But another arm of the Healey administration is building a platform designed to make state data available for AI training far more broadly.
In August 2025, the Massachusetts Technology Collaborative, or MassTech, issued a request for proposals on behalf of the state’s AI Hub for a “Data Commons Collaborative.” The platform would allow users to generate artificial datasets modeled on real resident data, a technique meant to preserve the statistical patterns in sensitive records while stripping out identifying details. Healthcare is listed as a top priority. Companies that train models on the platform would keep the intellectual property.
MassTech has not defined the validation criteria for ensuring those synthetic data copies actually protect the people they’re based on, leaving much of this work up to the AI companies pitching to run the program for the state.
Vendors submitted more than 340 questions in response to the request for proposals. This question-and-answer phase is a normal step in most government procurements, but the volume of questions is notable, as are the state’s answers. When one vendor asked about “privacy-utility tradeoffs,” MassTech said the system should be “configurable,” meaning privacy protections could be dialed up or down depending on how useful the state wants the data to be.
“Vendors are encouraged to propose recommended approaches, metrics, and safeguards based on best practices and compliance considerations,” another answer said.
A spokesperson for MassTech told The Shoestring that “at this time, the Data Commons initiative led by the MA AI Hub is undergoing additional review.”