Artificial intelligence (AI) is rarely out of the news. This week is no exception, with the UK Government announcing the setup of an AI Safety Institute, and with leading AI companies starting to share their AI technology safety strategies. Not to mention the US issuing an Executive Order on 'Safe, Secure and Trustworthy AI', focused in particular on new standards for AI safety and security.
Whilst this isn't a permanent US law yet, the order places a high emphasis on the National Institute of Standards and Technology's (NIST) efforts to enhance existing guidelines for AI risk management, and on 'red team' testing to identify potential vulnerabilities. Skills are not overlooked in the executive order either: the US administration notes that individuals with AI skills can seek opportunities with the federal government via AI.gov. The significance of AI is growing across all professions, as the UK Government announces its own AI skills package. The World Economic Forum argues we should all prepare our workforce for emerging roles when considering the AI skills gap, by recognising commonalities between current and future skill requirements.
There has been a surge in AI governance initiatives at every level, from national and international government support to multi-industry cooperation, which represents a logical extension of the swift adoption of AI and the industry's realignment around it. These established measures have laid the foundation for participation in the Group of Seven's (G7's) recently released Guiding Principles and Code of Conduct on Artificial Intelligence.
The executive order also requires "the development of a National Security Memorandum that directs further actions on AI and security." This is likely to support international views, announced later this week, on the ethical use cases and risks associated with 'Frontier AI' use across government, military, and law enforcement.
The European Union (EU) is nearing the end of negotiations on its AI Act, and it's interesting to note how closely aligned its goals and objectives are with the US Executive Order. Both call out the need for testing and enhanced safety and security measures, alongside key privacy and transparency protections for consumers. However, a significant distinction exists: the EU AI Act is a legal framework with proposed penalties, while the Executive Order will need US federal government cross-party support and influence to become legislation.
In partnership with the IAPP, QA has launched an AI governance course aimed at people looking to start using AI in their business, to help them understand the governance, safety, security, privacy, and risk challenges around it. It's not a generative AI engineering technical course, so it's far more accessible for all audiences seeking to use and understand AI in their roles.
The Certified Artificial Intelligence Governance Professional (AIGP) curriculum provides an overview of AI technology, a survey of current law, international government AI safety guidance and cooperation, strategies for risk management, security and safety considerations, privacy protection, bias and trustworthiness, and other topics. Designed to ensure AI is used in a safe way, an AIGP-trained and certified professional will know how to implement, and effectively communicate across teams, the emerging best practices and rules for responsible management of the AI ecosystem.
In summary, an AI governance professional will take responsibility for steering the adoption and implementation of AI in a way that minimises risk and enhances business growth and opportunity, while ensuring safety and instilling trust.
Richard Beck
Richard is an experienced security professional, turned educator, with over 15 years in operational security roles. He is driven by a commitment to helping address immediate and longer-term cyber skills shortages and to bringing a more diverse range of individuals and experiences into cyber through ecosystem collaboration.