HARTFORD (AP) — The Connecticut Senate pressed ahead Wednesday with one of the first major legislative proposals in the U.S. to rein in bias in artificial intelligence decision-making and protect people from harm, including manufactured videos or deepfakes.
The vote was held despite concerns the bill might stifle innovation, become a burden for small businesses and make the state an outlier.
The bill passed 24-12 after a lengthy debate. It is the result of two years of task force meetings in Connecticut and a year’s worth of collaboration among a bipartisan group of legislators from other states who are trying to prevent a patchwork of laws across the country because Congress has yet to act.
“I think that this is a very important bill for the state of Connecticut. It’s very important I think also for the country as a first step to get a bill like this,” said Democratic Sen. James Maroney, the key author of the bill. “Even if it were not to come and get passed into law this year, we worked together as states.”
Lawmakers from Connecticut, Colorado, Texas, Alaska, Georgia and Virginia who have been working together on the issue have found themselves in the middle of a national debate between civil rights-oriented groups and the industry over the core components of the legislation. Several of the legislators, including Maroney, participated in a news conference last week to emphasize the need for legislation and highlight how they have worked with industry, academia and advocates to create proposed regulations for safe and trustworthy AI.
But Senate Minority Leader Stephen Harding said he felt like Connecticut senators were being rushed to vote on the most complicated piece of legislation of the session, which is scheduled to adjourn May 8. The Republican said he feared the bill was “full of unintended consequences” that could prove detrimental to businesses and residents in the state.
“I think our constituents are owed more thought, more consideration to this before we push that button and say this is now going to become law,” he said.
Besides pushback from Republican legislators, some key Democrats in Connecticut, including Gov. Ned Lamont, have voiced concern the bill may negatively impact an emerging industry. Lamont, a former cable TV entrepreneur, “remains concerned that this is a fast-moving space, and that we need to make sure we do this right and don’t stymie innovation,” his spokesperson Julia Bergman said in a statement.
Among other things, the bill includes protections for consumers, tenants and employees by attempting to target risks of AI discrimination based on race, age, religion, disability and other protected classes. Besides making it a crime to spread so-called deepfake pornography and deceptive AI-generated media in political campaigns, the bill requires digital watermarks on AI-generated images for transparency.
Additionally, certain AI users will be required to develop policies and programs to eliminate risks of AI discrimination.
The legislation also creates a new online AI Academy where Connecticut residents can take classes in AI, and it ensures AI training is part of state workforce development initiatives and other state training programs. Some advocates say the bill doesn't go far enough and have called for restoring a requirement that companies disclose more information to consumers before using AI to make decisions about them.
The bill now awaits action in the House of Representatives.