A bill that would force ALL chatbots to inform users that they are not human has cleared a House subcommittee


A bill that would force ALL chatbots to inform users that they are not human has cleared a House subcommittee, but lawmakers intend to narrow the proposal. As currently written, it would require a chatbot to disclose it's not a human at the start of an online interaction and to remind the user it's not human every half hour after that. A company could be fined up to 100-thousand dollars each time its chatbot fails to make those statements. Republican Representative Austin Harris of Moulton, the bill's sponsor, says chatbots are an unregulated new frontier in technology. "Artificial Intelligence chatbots, mental health chatbots posing as such are encouraging kids to commit suicide or do harmful things to themselves and so we're bringing this bill forward to be able to start a discussion and see where it goes," Harris said.

Lobbyists for Google, Verizon and other businesses told the subcommittee they'll offer suggestions for narrowing the bill to ensure it doesn't apply to "everyday business tools," like chatbots that help someone find a new flight or schedule an oil change.