Survey report reveals experts’ views on the establishment of Australia’s AI Safety Institute

A survey by the charity Good Ancestors of 139 AI safety and policy professionals found that 90% view an overly “bureaucratic culture” as the primary deterrent to joining Australia’s new AI Safety Institute (AISI), and more than half said funding below $10 million would make the institute unappealing. Despite these concerns, there is broad support for the government’s initiative to establish the institute as a hub of AI safety expertise. In particular, 88.8% of respondents said the AISI should either focus primarily on catastrophic risks or take a balanced approach. Autonomous systems (85.8%), cyber misuse (81.2%), and dual-use science, including chemical, biological, radiological, and nuclear weapons (79.8%), were the risk areas most frequently rated as very important or critical to work on.
