Tech Leaders sound alarm: AI development must be regulated now

2023-03-30 | 11:28

In 1973, the Hollywood film Westworld portrayed the consequences of humans losing control over the machines they created.

Today, following the launch of GPT-4 and rapid advances in artificial intelligence (AI), that science fiction scenario has become a topic of serious discussion.

In recent months, the public has seen what AI can do, from powerful reasoning and analysis to the fake images, recordings, and videos that have flooded social media. The potential consequences of unregulated AI development are vast and far-reaching.

The Future of Life Institute, an organization focused on addressing the threats posed by AI, has released an open letter signed by more than 1,100 experts, including researchers, academics, and technology workers. The letter calls for a six-month pause on the training of AI systems more powerful than GPT-4, during which governments and companies would establish shared policies and safeguards.

Elon Musk, who leads Tesla, SpaceX, and Twitter, and Steve Wozniak, a co-founder of Apple, were among the signatories.

The letter also poses a series of pointed questions: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization?”

The signatories argue that such decisions about the development of AI should not be delegated to unelected tech leaders.

However, the letter was not signed by Sam Altman, the CEO of OpenAI, the company behind GPT-4, nor by Google CEO Sundar Pichai or Microsoft CEO Satya Nadella.

The question remains whether we are facing a real and serious danger or whether some companies are simply trying to catch up with their competitors. Either way, there is little doubt that AI development needs to be regulated to ensure safety and prevent potential harms.
