
Local Brazilian Councillors Express Regret After Passing Law Written Entirely by AI

By Christian Webster | 5 min read
A robotic arm signs a paper beside a gavel | Featured photo: Shutterstock

Artificial intelligence has reached a significant milestone in Porto Alegre, a major Brazilian city: it has become the first city in Brazil to pass legislation written entirely by artificial intelligence.

 

This unprecedented event has sparked both objections and questions about the role of AI in public policy.

 

Councillor Ramiro Rosario, the proponent behind this historic move, revealed that OpenAI's chatbot, ChatGPT, was responsible for drafting the bill.

 

Despite its pioneering nature, the bill garnered unanimous approval from the 36-member council and took effect as local law on November 23.

 

Rosario initially approached ChatGPT to create a proposal to prevent taxpayers in Porto Alegre from covering the costs of replacing stolen water consumption meters.

 

Notably, Rosario presented the proposal to his fellow council members without disclosing its AI-generated origin, fearing that revealing the truth beforehand would have jeopardized its chances of being put to a vote.

 

However, the councillor emphasized that it would have been unfair to the public to risk the project's rejection solely because of its AI authorship.

 

With the emergence of chatbots like ChatGPT, the global conversation surrounding AI-powered technologies has intensified.

 

While these tools show promise as valuable assets, concerns persist about the unintended consequences of handing them tasks traditionally performed by humans.

 

Hamilton Sossmeier, president of Porto Alegre's city council, initially called the move a "dangerous precedent" after learning of Rosario's use of ChatGPT from the councillor's own posts on social media.

Challenges and the Phenomenon of False Information

 

Large language models such as ChatGPT operate on the principle of repeatedly predicting the most likely next word in a sequence of text.

A robot, an illustration of artificial intelligence | Pixabay

This approach leaves room for the model to generate false information, a phenomenon known as hallucination.
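
For readers curious what "repeatedly predicting the next word" looks like in practice, here is a minimal illustrative sketch. It runs that prediction loop with the small, open-source GPT-2 model via Hugging Face's transformers library; the model choice, the prompt text, and the 20-token limit are all stand-ins for illustration, not the proprietary systems behind ChatGPT.

```python
# Minimal sketch of next-word (next-token) prediction, the loop described above.
# GPT-2 and the prompt below are illustrative stand-ins, not ChatGPT itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The city council passed a law that"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Repeatedly pick the single most likely next token and append it to the text.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits      # a score for every vocabulary token
    next_id = logits[0, -1].argmax()          # greedy choice: the highest-scoring token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Because the model only ever chooses a plausible-sounding continuation, nothing in this loop checks whether the generated words are actually true, which is how hallucinated "facts" can slip into otherwise fluent output.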

 

A study by Vectara, a company founded by former Google employees, indicates that even the most advanced GPT model introduces false information into document summaries around 3% of the time, while less-refined models may exhibit rates as high as 27%.

 

This fuels skepticism about relying solely on AI for complex tasks, especially interpreting legal principles and precedents.

The Need for Human Insight

 

Andrew Perlman, dean of Suffolk University Law School, acknowledged the potential significance of ChatGPT's arrival, suggesting it could mark an even more momentous shift than the advent of the internet itself.

 

However, he also cautioned against overreliance on AI, noting that a system like ChatGPT may not always possess the nuanced understanding and judgment required for in-depth legal analysis.

 

This raises concerns about the adequacy of AI when confronted with intricate legal scenarios.

 

Rosario's use of ChatGPT to draft legislation is not an isolated case; lawmakers elsewhere in the world have also tested the technology, with varying degrees of success.

 

In Massachusetts, Senator Barry Finegold, a Democrat, enlisted ChatGPT's assistance in developing a bill to regulate artificial intelligence models, including ChatGPT.

 

Despite being filed earlier this year, the bill has yet to come to a vote, underscoring that AI regulation remains a work in progress around the world.
