EDITOR'S QUESTION
AS THE ADOPTION OF GENERATIVE AI SPREADS ACROSS ENTERPRISES AND SHADOW AI BEGINS TO EMERGE FROM VARIOUS DEPARTMENTS, HOW SHOULD USAGE OF DATA BE MANAGED TO ENSURE IT IS AVAILABLE FOR GENERATIVE AI USE CASES AND DOES NOT VIOLATE DATA PRIVACY AND DATA COMPLIANCE POLICIES?
With the growing adoption of Generative AI, data management and compliance have become serious concerns for enterprises. If you simply layer AI on top of your existing technology stack, you are asking for trouble: this approach adds complexity, increases risk, and makes governance a nightmare. A better strategy is to rethink the stack entirely and move toward a more modular, building-block approach. Executives from Zoho, Cloud Box, NetApp and SoftServe share their answers.
SARAN B PARAMASIVAM, REGIONAL DIRECTOR, MIDDLE EAST AND AFRICA, ZOHO
Implementing robust security measures such as data minimisation and anonymisation helps protect personal information and reduce exposure. While hosting AI models on private servers can be costly, it gives organisations greater control over data security and mitigates the risks associated with third-party platforms.
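To make the two measures above concrete, a minimal sketch of data minimisation and pseudonymisation before records reach a Generative AI pipeline is shown below. The field names, the allowed-field list, and the salt are all hypothetical assumptions for illustration; a real deployment would source the key from a secrets manager and define fields per use case.

```python
import hmac
import hashlib

# Hypothetical secret salt; in practice this comes from a secrets manager.
SALT = b"example-secret-salt"

# Illustrative policy: fields the use case is allowed to see at all
# (data minimisation), and which of those must be pseudonymised.
ALLOWED_FIELDS = {"email", "phone", "country"}
SENSITIVE_FIELDS = {"email", "phone"}

def pseudonymise(record: dict) -> dict:
    """Drop fields the use case does not need; replace sensitive values
    with keyed hashes so they cannot be read downstream."""
    out = {}
    for key, value in record.items():
        if key not in ALLOWED_FIELDS:
            continue  # minimisation: never forward unneeded fields
        if key in SENSITIVE_FIELDS:
            # HMAC-SHA256 yields a stable 64-character hex pseudonym
            out[key] = hmac.new(SALT, str(value).encode(), hashlib.sha256).hexdigest()
        else:
            out[key] = value
    return out

record = {"email": "user@example.com", "phone": "+971-50-0000000",
          "country": "AE", "salary": 90000}
print(pseudonymise(record))
```

Keyed hashing (rather than plain SHA-256) is used here so that pseudonyms cannot be reversed by hashing guessed values without the salt; the same input still maps to the same pseudonym, which keeps records joinable for analytics.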
At the same time, security audits and vulnerability assessments must be conducted regularly to identify potential issues before they escalate.
It is also imperative to factor in the human element and ensure that employees are educated about the risks of using unapproved AI applications and of sharing sensitive data on unsecured platforms.
Ongoing training helps employees understand safe AI practices, the importance of safeguarding proprietary information, and the legal and ethical implications of their actions.
All of this can be achieved through a combination of strong data governance, secure hosting, access control and continuous employee education.
Clear data management policies, such as tagging sensitive data and controlling access to it, should be established to minimise risk.
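The tagging-and-access-control policy described above can be sketched as a simple tag-based check: datasets carry a sensitivity tag, roles carry a clearance level, and a Generative AI tool is treated as just another role. The dataset names, tags, and roles below are hypothetical assumptions, not part of any named product.

```python
# Hypothetical sensitivity ordering: higher number = more sensitive.
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

# Illustrative dataset tags and role clearances.
DATASET_TAGS = {
    "marketing_copy": "public",
    "sales_pipeline": "internal",
    "customer_pii": "restricted",
}
ROLE_CLEARANCE = {
    "ai_assistant": "internal",     # a Generative AI tool gets its own role
    "data_engineer": "confidential",
    "dpo": "restricted",
}

def can_access(role: str, dataset: str) -> bool:
    """Allow access only when the role's clearance meets or exceeds
    the dataset's sensitivity tag."""
    return SENSITIVITY[ROLE_CLEARANCE[role]] >= SENSITIVITY[DATASET_TAGS[dataset]]

print(can_access("ai_assistant", "customer_pii"))   # the AI tool is denied PII
print(can_access("dpo", "customer_pii"))            # the privacy officer is not
```

Modelling the AI tool as a role in the same policy engine as human users is the point of the sketch: Shadow AI becomes governable once every consumer of data, human or machine, passes through the same tag check.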
32 INTELLIGENTCIO MIDDLE EAST www.intelligentcio.com