Implementing AI tools too quickly could lead to mistakes. This underscores the need for a balanced, informed approach to AI integration in cybersecurity, one that combines strategy with team training. With so much at stake from both a security and a privacy perspective, it is wise not to move forward without careful planning.
It is imperative we not only slow down, but also take steps to enhance cybersecurity across the entire lifecycle of AI development. This includes adopting secure coding practices, scanning for vulnerabilities, conducting code reviews, employing static and dynamic analysis tools, and performing regular security testing and validation.
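As a minimal sketch of how some of these checks might be automated, the script below chains two widely used open-source scanners, Bandit for static analysis of Python source and pip-audit for known dependency vulnerabilities, into a single gate that a CI pipeline could run on every commit. The tool choices and the src/ path are illustrative assumptions, not tools named in this article.

```python
"""Illustrative security gate: fail the build if any scanner reports findings.

Assumes Bandit and pip-audit are installed in the build environment.
"""
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src/"],  # static analysis of source code
    ["pip-audit"],             # scan installed dependencies for known CVEs
]

def main() -> int:
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        # Both tools exit non-zero when they report findings.
        if subprocess.run(cmd).returncode != 0:
            print(f"Security check failed: {' '.join(cmd)}", file=sys.stderr)
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```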
Here are some further best practices to help IT and cybersecurity leaders who are already working on AI projects and AI systems.
An Accenture poll of more than 3,400 IT executives finds nearly three-quarters are exercising restraint with Generative AI investments .
#1 Security across the AI lifecycle
Today we are seeing Generative Artificial Intelligence harnessed to introduce more and more applications quickly and at scale. However, this speed has created a serious security issue: many of these applications lack security controls and contain vulnerabilities. To date, hundreds of vulnerabilities have been disclosed in AI applications, including in Microsoft Copilot, Flowise and Langflow, found by the Tenable Research Team.
Threat actors will use attack vectors unique to AI systems, along with standard techniques to attack traditional IT systems.
An important step is to assign responsibility for the cybersecurity of AI to the same executive responsible for enterprise cybersecurity. Consider including security requirements for the AI system in procurement contracts with vendors of AI products or services.
Follow best practices for the AI deployment environment, such as using hardened containers for running machine learning models; monitoring networks; applying allowlists on firewalls; keeping hardware updated; encrypting sensitive data; and employing strong authentication and secure communication protocols.
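To make the last of these points concrete, here is a minimal sketch of a model-serving endpoint that demands a bearer token before answering. FastAPI, the /predict route and the MODEL_TOKEN environment variable are all assumptions chosen for illustration; TLS would typically be terminated by a proxy in front of this service.

```python
"""Hypothetical token-authenticated model endpoint (not a reference design)."""
import hmac
import os

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer()
MODEL_TOKEN = os.environ["MODEL_TOKEN"]  # injected by a secrets manager, never hard-coded

def check_token(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> None:
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(creds.credentials, MODEL_TOKEN):
        raise HTTPException(status_code=401, detail="invalid token")

@app.post("/predict")
def predict(payload: dict, _: None = Depends(check_token)) -> dict:
    # Placeholder for the actual model inference call.
    return {"result": "ok"}
```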
#2 Data leakage in AI
AI operates by analysing vast amounts of data. Ensuring the ethical use of this data and maintaining privacy standards is essential for protecting the enterprise's intellectual property and other sensitive data, such as customers' personal data.
Organisations should establish clear guidelines around the use of data. This involves obtaining consent for data use, anonymising sensitive information, and being transparent about how AI models use data.
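As a small illustration of the anonymisation step, the sketch below redacts e-mail addresses and phone-number-like strings from text before it is sent to a model. The patterns and the redact() helper are assumptions for illustration, not a complete PII solution; real deployments usually rely on dedicated detection tooling.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage
# (names, addresses, national IDs) than these two expressions.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches with placeholder tags before the text leaves the organisation."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane@example.com or +44 20 7946 0958."))
# -> Contact Jane at [EMAIL] or [PHONE].
```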
A recent study from Coleman Parkes Research found only one in 10 organisations has a reliable system in place to measure bias and privacy risk in large language models .
Accenture recommends business leaders keep the following in mind when it comes to data leakage risks: recognise that the risk of unintentional transmission of confidential data through Generative AI applications is significant; develop a custom front-end that interacts with the underlying language model API; and establish sandboxes that act as controlled environments where data is isolated.
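A minimal sketch of how the second and third recommendations could fit together: a thin gateway that redacts confidential patterns and only then forwards the prompt to the underlying model API, logging nothing but the redacted form. The ModelGateway class, the call_model() stub and the marker string are hypothetical, not Accenture's design.

```python
from datetime import datetime, timezone

def call_model(prompt: str) -> str:
    """Stand-in for the underlying language model API call."""
    return f"(model response to: {prompt})"

def redact(text: str) -> str:
    """Stand-in for the anonymisation step sketched earlier."""
    return text.replace("ACME-CONFIDENTIAL", "[REDACTED]")

class ModelGateway:
    """Hypothetical front-end: prompts are redacted before they leave the
    controlled environment, and only the redacted form is audit-logged."""

    def __init__(self) -> None:
        self.audit_log: list[tuple[str, str]] = []

    def ask(self, prompt: str) -> str:
        safe_prompt = redact(prompt)
        # Record what actually crossed the boundary, never the raw input.
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), safe_prompt)
        )
        return call_model(safe_prompt)

gateway = ModelGateway()
print(gateway.ask("Summarise the ACME-CONFIDENTIAL quarterly figures."))
```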
Bernard Montel, EMEA Technical Director and Security Strategist, Tenable
Without proper guidance, employees may not understand the risks of AI technology, which could lead to the emergence of shadow IT and pose new cybersecurity risks. A comprehensive workforce training programme needs to be implemented.