As in: Yes there should be regulations.
People are running experiments with AIs for instance deciding on investments. They are putting up say $100 and then "lets see what happens". Currently there usually is still a human in the loop. Eventually the AI might become good enough to make money for itself. Then, if you interface the AI with say the stock exchange order computer, the AI would be able to make money for itself.
People are giving say $100 and then asking the AI what to do with it. Currently those are "funny stories", but given that AIs sometimes "turn bad" really quickly (microsoft twitter bot), it is something to be worried about.
Once you give an AI actuators that allow it to do stuff to the outside world, things could go south really quickly. Say someone uses AWS to run an AI and allows it to manage some money. Now when it decides A) that it has enough money B) that it doesn't want to get turned off, it might decide to pay for an AWS server itself and clone itself. Now when you see things going south and pull the plug... the clone knows you killed its father and might be kind of mad at you...
Sure, chatGPT is not yet capable of this level of consciousness, reasoning, feelings etc. But how far is that away? 1 year? 5 years? 10 maybe? By then it might be too late to start thinking about these things.