Last week, I went on the CBC News podcast "Nothing Is Foreign" to talk about the draft regulation, and what it means for the Chinese government to take such swift action on a still-very-new technology.
As I said on the podcast, I see the draft regulation as a mix of sensible restrictions on AI risks and a continuation of China's strong tradition of aggressive government intervention in the tech industry.
Many of the clauses in the draft regulation are principles that AI critics are advocating for in the West: data used to train generative AI models should not infringe on intellectual property or privacy; algorithms should not discriminate against users on the basis of race, ethnicity, age, gender, or other attributes; and AI companies should be transparent about how they obtained training data and how they hired humans to label it.
At the same time, there are rules that other countries would likely balk at. The government is asking that people who use these generative AI tools register with their real identity, just as on any social platform in China. The content that AI software generates should also "reflect the core values of socialism."
Neither of these requirements is surprising. The Chinese government has regulated tech companies with a strong hand in recent years, punishing platforms for lax moderation and folding new products into the established censorship regime.
The document makes that regulatory tradition easy to see: there are frequent mentions of other rules that have already passed in China, covering personal data, algorithms, deepfakes, cybersecurity, and so on. In some ways, it feels as if these discrete documents are slowly forming a web of rules that helps the government handle new challenges in the tech era.
The fact that the Chinese government can react so quickly to a new tech phenomenon is a double-edged sword. The strength of this approach, which looks at each new tech trend individually, "is its precision, creating specific remedies for specific problems," wrote Matt Sheehan, a fellow at the Carnegie Endowment for International Peace. "The weakness is its piecemeal nature, with regulators forced to draw up new regulations for new applications or problems." If the government is busy playing whack-a-mole with new rules, it may miss the opportunity to think strategically about a long-term vision for AI. This approach can be contrasted with that of the EU, which has been working on a "hugely ambitious" AI Act for years, as my colleague Melissa recently explained. (A recent revision of the AI Act draft included rules on generative AI.)